47 results for Explanatory Combinatorial Lexicology
in Helda - Digital Repository of the University of Helsinki
Abstract:
This thesis describes current and past n-in-one methods and presents three early experimental studies on the application of n-in-one in drug discovery, using mass spectrometry and the triple quadrupole instrument. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis. This allows the most promising compounds to be rapidly identified and then analysed. Nowadays the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods, such as Caco-2 cells and immobilized artificial membrane chromatography, for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of alkyl chain lengths from methyl to pentyl on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Two nitro groups increased fragmentation compared to one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation.
Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was used for Caco-2 samples, in which the absorption of ten chemically and physiologically different compounds was screened using both single and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. Different mass spectrometric scanning modes of the triple quadrupole instrument were used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and its proper use, is especially important.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
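The combinatorial-state automaton (CSA) formalism mentioned above can be given a rough computational reading: a state is a vector of substates, and a transition function maps each (state vector, input vector) pair to a next state vector and an output vector. The sketch below is purely illustrative; the class, names and example transition rule are assumptions for exposition, not the thesis's formal definition.

```python
# Illustrative sketch of a combinatorial-state automaton (CSA) in the
# spirit of Chalmers's formalism: states are vectors of substates, and
# a transition function maps (state, input) to (next state, output).
# All names and the example rule are hypothetical.
from typing import Callable, List, Tuple

State = Tuple[int, ...]   # vector of substates
Vector = Tuple[int, ...]  # input/output vector

class CSA:
    def __init__(self, transition: Callable[[State, Vector], Tuple[State, Vector]]):
        self.transition = transition

    def run(self, state: State, inputs: List[Vector]) -> Tuple[State, List[Vector]]:
        outputs = []
        for inp in inputs:
            state, out = self.transition(state, inp)
            outputs.append(out)
        return state, outputs

# Example rule: add each input component into the corresponding
# substate, and emit the previous state vector as the output.
def step(state: State, inp: Vector) -> Tuple[State, Vector]:
    new_state = tuple(s + i for s, i in zip(state, inp))
    return new_state, state

csa = CSA(step)
final, outs = csa.run((0, 0), [(1, 0), (0, 1)])
# final == (1, 1); outs == [(0, 0), (1, 0)]
```

The relativist worry discussed in the abstract is that, without constraints on what counts as "implementing" such a transition table, very many physical state sequences can be mapped onto runs like the one above.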
Abstract:
Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need to investigate in vitro ADME (absorption, distribution, metabolism and excretion) properties and to recognise unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with Caco-2 cell culture, which is a widely used in vitro model of small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for the standardization and routine testing of the performance of Caco-2 cell cultures. Second, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments.
The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited both strengths and weaknesses in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient. Thus a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
Abstract:
Bertrand Russell (1872-1970) introduced the English-speaking philosophical world to modern, mathematical logic and the foundational study of mathematics. The present study concerns the conception of logic that underlies his early logicist philosophy of mathematics, formulated in The Principles of Mathematics (1903). In 1967, Jean van Heijenoort published a paper, "Logic as Language and Logic as Calculus", in which he argued that the early development of modern logic (roughly the period 1879-1930) can be understood when considered in the light of a distinction between two essentially different perspectives on logic. According to the view of logic as language, logic constitutes the general framework for all rational discourse, or meaningful use of language, whereas the conception of logic as calculus regards logic more as a symbolism which is subject to reinterpretation. The calculus view paves the way for systematic metatheory, where logic itself becomes a subject of mathematical study (model theory). Several scholars have interpreted Russell's views on logic with the help of the interpretative tool introduced by van Heijenoort. They have commonly argued that Russell's is a clear-cut case of the view of logic as language. In the present study a detailed reconstruction of this view and its implications is provided, and it is argued that the interpretation is seriously misleading as to what Russell really thought about logic. I argue that Russell's conception is best understood by setting it in its proper philosophical context, which is constituted by Immanuel Kant's theory of mathematics. Kant had argued that purely conceptual thought, basically the logical forms recognised in Aristotelian logic, cannot capture the content of mathematical judgments and reasoning. Mathematical cognition is grounded not in logic but in space and time as the pure forms of intuition.
As against this view, Russell argued that once logic is developed into a proper tool which can be applied to mathematical theories, Kant's views turn out to be completely wrong. In the present work the view is defended that Russell's logicist philosophy of mathematics, or the view that mathematics is really only logic, is based on what I term the 'Bolzanian account of logic'. According to this conception, (i) the distinction between form and content is not explanatory in logic; (ii) the propositions of logic have genuine content; (iii) this content is conferred upon them by special entities, 'logical constants'. The Bolzanian account, it is argued, is both historically important and throws genuine light on Russell's conception of logic.
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambiguity in the concept of mechanism used in many model-based explanations, and this ambiguity corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions.
Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
The development of botanical Finnish: Elias Lönnrot as the creator of new terminology. In the 19th century the Finnish language was intentionally developed to meet the demands of civilised society and Finnish-language science. The development of the language involved several people from different fields of science. This study examines this enormous project in the field of botany. By which methods were scientific terms formed, and for which reasons were those terms used? Why was a certain word chosen to represent a particular concept? The material of this study is the terminology of plant morphology in Finnish that Elias Lönnrot developed in the middle of the 19th century. The terms of plant morphology denote and describe the parts of the plant and the relationships between those parts. For instance, the terms emi 'pistil', hede 'stamen', terälehti 'petal' and verholehti 'sepal', which are nowadays familiar in the general language, were used for the first time in Lönnrot's texts. The study integrates the methods of lexicology and terminology. In lexicology, the word and its various meanings serve as the focus, whereas the theory of terminology focuses on the concept and concept systems. A new, consciously developed terminology can be understood through the old, familiar vocabulary and structures as well as through the new, logical term system. Lönnrot's botanical terminology can be divided into three groups depending on their origin: 1) 19% of all terms have been accepted from the existing vocabulary and used in their original meanings, 2) 11% of all terms have been chosen from the existing vocabulary and used in new, specific botanical meanings, and 3) 70% of all terms have been created on the basis of the existing vocabulary and used in new, specific botanical meanings. The study therefore reveals that domestic materials, primarily morphosemantic neologisms, form the Finnish terminology of plant morphology.
Characteristic of Lönnrot's botanical terms is the utilisation of the vocabulary of various Finnish dialects and of particular repeating elements. Repeating elements include, for example, prefixes that derive from botanical Latin or Swedish as well as particular Finnish derivation types. Such structures form term systems that reflect scientific concept systems. Two thirds of the newly created words are formed loosely or precisely according to either Latin or Swedish terms; one third is formed completely differently from its equivalents in the foreign languages. Approximately half of the chosen terms are formed differently from the Latin and Swedish terms. It is worth noting that many loan translations use rare vocabulary from Finnish dialects as equivalents to foreign parts of terms. Lönnrot aimed to render scientific terminology in Finland's own language, thus making scientific texts accessible to the Finnish agricultural population.
Abstract:
The historical development of Finnish nursing textbooks from the late 1880s to 1967: the training of nurses in the Foucauldian perspective. This study aims, first, to analyse the historical development of Finnish nursing textbooks in the training of nurses and in nursing education: what Foucauldian power processes operated in the writing and publishing processes? What picture of nursing did early nursing books portray, and who were the decision makers? Second, this study also aims to analyse the processes of power in nurse training. The time frame extends from the early stages of nurse training in the late 1880s to 1967. The present study is part of textbook research and of the history of professional education in Finland. This study seeks to explain how, and by whom or what, power was exercised in the writing of nursing textbooks and through the textbooks themselves. Did someone use these books as a tool to influence nursing education? The third aim of this study is to define and analyse the purpose of nurse training. Michel Foucault's concept of power served as an explanatory framework for this study. A very central part of power is the assembling of data, the supplying of information and messages, and the creation of discourses. When applied to the training of nurses, power dictates what information is taught in the training and contained in the books. Thus, the textbook holds an influential position as a power user in these processes. Other processes in which such power is exercised include school discipline and all other normalizing processes. One of the most powerful normalizing arrangements was the hall of residence, where nursing pupils were required to live. Trained nurses desired to separate themselves from their untrained predecessors and from those with less training by wearing different uniforms and living in separate housing units. The state supported the registration of trained nurses through legislation.
With this decision the state made it illegal to work as a nurse without an authorised education, and used these regulations to limit and confirm the professional knowledge and power of nurses. Nurses, physicians and government authorities used textbooks in nursing education as tools to achieve their own purposes and principles. With these books all three groups attempted to confirm their own professional power and knowledge while at the same time limiting the power and expertise of the others. Public authorities sought to unify the training of nurses and the basis of knowledge in all nursing schools in Finland through similar and obligatory textbooks. This standardisation started 20 years before the government unified nursing training in 1930. The textbooks also served as data assemblers in unifying nursing practices in Finnish hospitals, because the Medical Board required all training hospitals to supply the textbooks to units with nursing pupils. For the nurses, and especially for the associations of Finnish nurses, producing and publishing their own textbooks for the training of nurses was part of their professional project. With these textbooks, the nursing elite and the teachers sought to shape nursing pupils' identities for nursing's very special mission. From the 1960s, nursing was no longer understood as a mission but as a normal vocation. Throughout the period studied, nurses and doctors disputed what the optimal relationship between theory and practice in nursing textbooks and in nurse education was. The discussion of medical knowledge in nursing textbooks took place in the 1930s and 1940s. Nurses were very uncertain about their own professional knowledge and expertise, which explains why they could not create a new nursing textbook despite the urgent need. A brand new nursing textbook was published in 1967, about 30 years after its predecessor. Keywords: nurse, nurse training, nursing education, power, textbook, Michel Foucault
Abstract:
Juvenile idiopathic arthritis (JIA) is a severe childhood disease usually characterized by long-term morbidity, an unpredictable course, pain, and limitations in daily activities and social participation. The disease affects not only the child but the whole family. The family is expected to adhere to an often very laborious regimen over a long period of time. However, the parental role is incoherently conceptualized in the research field. Pain in JIA is of somatic origin, but psychosocial factors, such as mood and self-efficacy, are critical in the perception of pain and in its impact on functioning. This study examined the factors correlating with and possibly explaining pain in JIA, with a special emphasis on the mutual relations between parent- and patient-driven variables. In this patient series pain was not associated with disease activity. The degree of pain was on average fairly low in children with JIA. When the children were clustered according to age, anxiety and depression, four distinguishable cluster groups significantly associated with pain emerged. One of the groups was described by the concept of vulnerability because of its unfavorable variable associations. Parental depressive and anxiety symptoms, together with illness management, had predictive power in discriminating groups of children with varying distress levels. The parent's and the child's perceptions of the child's functional capability, distress, and somatic self-efficacy had independent explanatory power in predicting the child's pain. Of special interest in the current study was self-efficacy, which refers to an individual's belief that he or she has the ability to engage in the behavior required for tackling the disease. In children with JIA, strong self-efficacy was related to lower levels of pain, depressive symptoms and trait anxiety. This suggests that strengthening a child's sense of self-efficacy can help the child cope with his or her disease.
Pain experienced by a child with JIA needs to be viewed in a multidimensional bio-psycho-social context that covers biological, environmental and cognitive behavioral mechanisms. The relations between the parent-child variables are complex and affect pain both directly and indirectly. Developing pain-treatment modalities that recognize the family as a system is also warranted.
Abstract:
The thesis aims to link the biolinguistic research program with the results of studies on conceptual combination from cognitive psychology. The thesis derives a theory of the syntactic structure of noun and adjectival compounds from the Empty Lexicon Hypothesis. Two compound-forming operations are described: root-compounding and word-compounding. The aptness of the theory is tested with Finnish and Greek compounds. From the syntactic theory, semantic requirements for the conceptual system are derived, especially requirements for handling morphosyntactic features. These requirements are compared to three prominent theories of conceptual combination: the relation theory CARIN, Dual-Process theory and the C3 theory. The claims about the explanatory power of the modifier's relational distributions in the CARIN theory are discarded, as the method for sampling and building relational distributions is not reliable and the algorithmic instantiation of the theory does not compute what it claims to compute. From the relational theory there still remain results supporting the existence of 'easy' relations for certain concepts. Dual-Process theory is found to provide results that cannot in theory be affected by the linguistic system, but the basic idea of property compounds is kept. The C3 theory is found not to be computationally realistic, but its basic results on diagnosticity and the local properties (domains) of the conceptual system are solid. The three conceptual combination models are rethought as a problem of finding the shortest route between two concepts. The new basis for modeling is suggested to be a bare conceptual landscape, with morphosyntactic or semantic features serving as guidance and the structural features of the landscape basically unknown, except insofar as they react to features from the linguistic system. Minimalist principles for conceptual modeling are suggested.
Abstract:
Is oral health becoming part of the global health culture? Oral health appears to be becoming part of the global health culture, according to the findings of thesis research at the Institute of Dentistry, University of Helsinki. The thesis is entitled “Preadolescents and Their Mothers as Oral Health-Promoting Actors: Non-biologic Determinants of Oral Health among Turkish and Finnish Preadolescents.” The research was supervised by Prof. Murtomaa and led by Dr. A. Basak Cinar. It was conducted as a cross-sectional study of 611 Turkish and 223 Finnish preadolescents in schools in Istanbul and Helsinki, from the fourth, fifth, and sixth grades, aged 10 to 12, based on self-administered and pre-tested health behavior questionnaires for the preadolescents and their mothers as well as the youths' oral health records. Clinically assessed dental status (DMFT) and the self-reported oral health of Turkish preadolescents were significantly poorer than those of the Finns. A similar association occurred for well-being measures (height and weight, self-esteem), but not for school performance. Turkish preadolescents were more dentally anxious and reported lower mean values of toothbrushing self-efficacy and dietary self-efficacy than did the Finns. The Turks reported recommended oral health behaviors (toothbrushing twice daily or more, sweet consumption on 2 days or fewer per week, decreased between-meal sweet consumption) less frequently than did the Finns. Turkish mothers less frequently reported their dental health as being above average, recommended oral health behaviors, and regular dental visits. Their mean value for dental anxiety was higher, and their self-efficacy in implementing twice-daily toothbrushing was lower, than those of the Finnish mothers.
Despite these differences between the Turks and Finns, the associations found in common for all preadolescents, regardless of cultural differences and different oral health care systems, assessed for the first time in a holistic framework, were as follows. There seems to be an interrelation between oral health and general well-being (body height-weight measures, school performance, and self-esteem) among preadolescents:
• Body height was an explanatory factor for dental health, underlining possible common life-course factors for dental health and general well-being.
• Better school performance and high levels of self-esteem and self-efficacy were interrelated, and they contributed to good oral health.
• Good school performance was a common predictor of twice-daily toothbrushing. Self-efficacy and maternal modelling play a significant role in the maintenance and improvement of both oral and general health-related behaviors. In addition, there is a need to integrate self-efficacy-based approaches to promote better oral health.
• All preadolescents with high levels of self-efficacy were more likely to report more frequent twice-daily toothbrushing and less frequent sweet consumption.
• All preadolescents were likely to imitate the toothbrushing and sweet consumption behaviors of their mothers.
• High levels of self-efficacy contributed to low dental anxiety in various patterns in both groups.
In conclusion:
• Many health-detrimental behaviors arise in the school-age years and are unlikely to change later. Schools have powerful influences on children's development and well-being. Therefore, oral health promotion in schools should be integrated into general health promotion, school curricula, and other activities.
• Health promotion messages should be reinforced in schools, enabling children and their families to develop lifelong sustainable positive health-related skills (self-esteem, self-efficacy) and behaviors.
• Placing more emphasis on behavioral sciences, preventive approaches, and community-based education during undergraduate studies should encourage social responsibility and health-promoting roles among dentists. Attempts to increase general well-being and to reduce oral health inequalities among preadolescents will remain unsuccessful if individual factors, as well as maternal and societal influences, are not considered through psycho-social holistic approaches.
Abstract:
Socioeconomic health inequalities have been widely documented, with a lower social position being associated with poorer physical and general health and higher mortality. For mental health the results have been more varied. However, the mechanisms by which the various dimensions of socioeconomic circumstances are associated with different domains of health are not yet fully understood. This is related to a lack of studies tackling the interrelations and pathways between multiple dimensions of socioeconomic circumstances and domains of health. In particular, evidence from comparative studies of populations from different national contexts that consider the complexity of the causes of socioeconomic health inequalities is needed. The aim of this study was to examine the associations of multiple socioeconomic circumstances with physical and mental health, more specifically physical functioning and common mental disorders. This was done in a comparative setting of two cohorts of white-collar public sector employees, one from Finland and one from Britain. The study also sought to find explanations for the observed associations between economic difficulties and health by analysing the contribution of health behaviours, living arrangements and work-family conflicts. The survey data were derived from the Finnish Helsinki Health Study baseline surveys in 2000-2002 among the City of Helsinki employees aged 40-60 years, and from the fifth phase of the London-based Whitehall II study (1997-9) which is a prospective study of civil servants aged 35-55 years at the time of recruitment. The data collection in the two countries was harmonised to safeguard maximal comparability. Physical functioning was measured with the Short Form (SF-36) physical component summary and common mental disorders with the General Health Questionnaire (GHQ-12). 
Socioeconomic circumstances were parental education, childhood economic difficulties, own education, occupational class, household income, housing tenure, and current economic difficulties. Further explanatory factors were health behaviours, living arrangements and work-family conflicts. The main statistical method used was logistic regression analysis. Analyses were conducted separately for the two sexes and the two cohorts. Childhood and current economic difficulties were associated with poorer physical functioning and common mental disorders generally in both cohorts and sexes. Conventional dimensions of socioeconomic circumstances, i.e. education, occupational class and income, were associated with physical functioning and mediated each other's effects, but in different ways in the two cohorts: education was more important in Helsinki and occupational class in London. The associations of economic difficulties with health were partly explained by work-family conflicts and other socioeconomic circumstances in both cohorts and sexes. In conclusion, this study of two country-specific cohorts confirms that different dimensions of socioeconomic circumstances are related but not interchangeable. They are also somewhat differently associated with the physical and mental domains of health. In addition to conventionally measured dimensions of past and present socioeconomic circumstances, economic difficulties should be taken into account in studies and in attempts to reduce health inequalities. Further explanatory factors, particularly conflicts between work and family, should also be considered when aiming to reduce inequalities and maintain the health of employees.
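The abstract names logistic regression as the main statistical method: a binary health outcome is modelled as a function of socioeconomic indicators, and effects are read off as odds ratios. The sketch below is only an illustration of that model form, fitted by simple gradient ascent on entirely made-up data; the variable names and numbers are assumptions, not the study's data or estimates.

```python
# Hypothetical illustration: logistic regression of a binary outcome
# (1 = poor physical functioning) on one binary predictor
# (1 = current economic difficulties), fitted by stochastic gradient
# ascent on the log-likelihood. Data are invented for the example.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    b0, b1 = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(b0 + b1 * x)
            b0 += lr * (y - p)       # gradient of log-likelihood w.r.t. intercept
            b1 += lr * (y - p) * x   # gradient w.r.t. slope
    return b0, b1

xs = [0, 0, 0, 0, 1, 1, 1, 1]  # economic difficulties (hypothetical)
ys = [0, 0, 0, 1, 0, 1, 1, 1]  # poor functioning (hypothetical)
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # OR > 1: difficulties associated with poorer functioning
```

In practice such analyses would also adjust for the other socioeconomic dimensions and the mediating factors the abstract lists, which is what makes the cross-cohort comparison informative.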
Abstract:
This research discusses the decoupling of CAP (Common Agricultural Policy) support and the impacts it may have on the grain cultivation area and the supply of beef and pork in Finland. The study presents the definitions of and studies on decoupled agricultural subsidies, the development of the supply of grain, beef and pork in Finland, and the changes in the leading factors affecting supply between 1970 and 2005. Decoupling agricultural subsidies means that the linkage between subsidies and production levels is disconnected: subsidies do not affect the amount produced. The hypothesis is that decoupling will substantially decrease the amounts produced in agriculture. In supply research, econometric models representing the supply of agricultural products are estimated from data on prices and quantities produced. With the estimated supply models, the impacts of changes in prices and public policies on the supply of agricultural products can be forecast. In this study, three regression models are estimated, describing the combined cultivation area of rye, wheat, oats and barley, the supply of beef, and the supply of pork. The grain cultivation area and the supply of beef are estimated from data for 1970 to 2005 and the supply of pork from data for 1995 to 2005. The dependencies in the models are postulated to be linear. The explanatory variables in the grain model were the average return per hectare, agricultural subsidies, the grain cultivation area in the previous year and the cost of fertilisation. The explanatory variables in the beef model were the total return from markets and subsidies and the amount of beef produced in the previous year. In the pork model the explanatory variables were the total return, the price of piglets, investment subsidies, a trend of increasing productivity and a dummy variable for the last quarter of the year. The R-squared of the grain cultivation area model was 0.81, of the beef supply model 0.77 and of the pork supply model 0.82.
The development of the grain cultivation area and the supply of beef and pork in 2006 - 2013 was then projected with these regression models. In the basic scenario, the explanatory variables in 2006 - 2013 were postulated to remain at their average 1995 - 2005 levels. After the basic scenario, the impacts of decoupling CAP subsidies and domestic subsidies on the cultivation area and supply were simulated. According to the results of the CAP decoupling scenario, the grain cultivation area decreases from 1.12 million hectares in 2005 to 1.0 million hectares in 2013 and the supply of beef from 88.8 million kilos in 2005 to 67.7 million kilos in 2013. Decoupling domestic and investment subsidies will decrease the supply of pork from 194 million kilos in 2005 to 187 million kilos in 2006. By 2013 the supply of pork grows to 203 million kilos.
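To make the estimation step concrete, here is a minimal sketch of fitting a linear supply model of this form by ordinary least squares. The data are synthetic, and the variable names (return per hectare, subsidies, fertiliser cost, lagged area) merely mirror the grain model's explanatory variables; the coefficients and the resulting R-squared are illustrative, not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 36  # invented annual observations, mimicking a 1970-2005 span
# Invented explanatory variables mirroring the grain-area model
ret_per_ha = rng.normal(500.0, 50.0, T)   # average return per hectare
subsidies = rng.normal(200.0, 30.0, T)    # agricultural subsidies
fert_cost = rng.normal(100.0, 15.0, T)    # cost of fertilisation
area_prev = rng.normal(1.1, 0.05, T)      # previous year's area (million ha)

# Simulated linear supply relation plus noise (coefficients are made up)
area = (0.2 + 0.0004 * ret_per_ha + 0.0006 * subsidies
        - 0.001 * fert_cost + 0.6 * area_prev + rng.normal(0.0, 0.01, T))

# Ordinary least squares, then the R-squared reported for such models
X = np.column_stack([np.ones(T), ret_per_ha, subsidies, fert_cost, area_prev])
beta, *_ = np.linalg.lstsq(X, area, rcond=None)
fitted = X @ beta
r_squared = 1 - np.sum((area - fitted) ** 2) / np.sum((area - area.mean()) ** 2)
```

Once such a model is fitted, forecasting a scenario amounts to feeding assumed future values of the explanatory variables (e.g. subsidies set to zero under decoupling) through `X @ beta` year by year, carrying the lagged area forward.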
Resumo:
Polyethene, polyacrylates and polymethacrylates are versatile materials that find a wide variety of applications in several areas. Therefore, the polymerization of ethene, acrylates and methacrylates has attracted considerable attention in recent years. A number of metal catalysts have been introduced in order to control the polymerization and to produce tailored polymer structures. Herein, an overview of the possible polymerization pathways for ethene, acrylates and methacrylates is presented. In this thesis, iron(II) and cobalt(II) complexes bearing tri- and tetradentate nitrogen ligands were synthesized and studied in the polymerization of tert-butyl acrylate (tBA) and methyl methacrylate (MMA). The complexes are activated with methylaluminoxane (MAO) before they form combinations active in polymerization reactions. The effects of the reaction conditions, i.e. monomer concentration, reaction time, temperature and MAO-to-metal ratio, on the activity and polymer properties were investigated. The described polymerization system enables mild reaction conditions and the possibility to tailor the molar mass of the produced polymers, and provides good control over the polymerization. Moreover, the polymerization of MMA in the presence of an iron(II) complex with tetradentate nitrogen ligands under the conditions of atom transfer radical polymerization (ATRP) was studied. Several manganese(II) complexes were studied in ethene polymerization with combinatorial methods and new active catalysts were found. These complexes were also studied in acrylate and methacrylate polymerizations after MAO activation and after conversion into the corresponding alkyl (methyl or benzyl) derivatives. Combinatorial methods were introduced to discover aluminum alkyl complexes for the polymerization of acrylates and methacrylates. Various combinations of aluminum alkyls and ligands, including phosphines, salicylaldimines and nitrogen donor ligands, were prepared in situ and used to initiate the polymerization of tBA.
Phosphine ligands were found to be the most active, and the polymerization of MMA was studied with these active combinations. In addition, a plausible polymerization mechanism for MMA, based on ESI-MS and 1H and 13C NMR data, is proposed.
Resumo:
Determination of the environmental factors controlling earth surface processes and landform patterns is one of the central themes in physical geography. However, the identification of the main drivers of geomorphological phenomena is often challenging. Novel spatial analysis and modelling methods could provide new insights into process-environment relationships. The objective of this research was to map and quantitatively analyse the occurrence of cryogenic phenomena in subarctic Finland. More precisely, utilising a grid-based approach, the distribution and abundance of periglacial landforms were modelled to identify important landscape-scale environmental factors. The study was performed using a comprehensive empirical data set of periglacial landforms from an area of 600 km2 at a 25-ha resolution. The statistical methods utilised were generalized linear modelling (GLM) and hierarchical partitioning (HP). GLMs were used to produce the distribution and abundance models, and HP to reveal independently the most likely causal variables. The GLM models were assessed utilising statistical evaluation measures, prediction maps, field observations and the results of the HP analyses. A total of 40 different landform types and subtypes were identified. Topographical, soil property and vegetation variables were the primary correlates of the occurrence and cover of active periglacial landforms on the landscape scale. In the model evaluation, most of the GLMs were shown to be robust, although the explanatory power and prediction ability, as well as the selected explanatory variables, varied between the models. The study demonstrated the great potential of combining a spatial grid system, terrain data and novel statistical techniques to map the occurrence of periglacial landforms.
GLM proved to be a useful modelling framework for testing the shapes of the response functions and the significance of the environmental variables, and the HP method helped in drawing better inferences about the important factors behind earth surface processes. Hence, the numerical approach presented in this study can be a useful addition to the current range of techniques available to researchers for mapping and monitoring different geographical phenomena.
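The hierarchical partitioning idea can be illustrated with a small numeric toy. The code below uses synthetic data and three invented predictors (slope, soil moisture, vegetation cover); for each predictor it averages, hierarchy level by hierarchy level, the gain in goodness of fit obtained by adding that predictor to every subset of the others, in the spirit of Chevan and Sutherland's method. It uses a plain linear fit and R-squared as the fit measure, not the binomial GLMs of the study.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Synthetic environmental predictors (names are illustrative only)
preds = {
    "slope": rng.normal(size=n),
    "soil_moisture": rng.normal(size=n),
    "veg_cover": rng.normal(size=n),
}
# Simulated landform cover driven mainly by slope, a little by soil moisture
cover = (1.5 * preds["slope"] + 0.5 * preds["soil_moisture"]
         + rng.normal(size=n))

def r_squared(names):
    """R-squared of an ordinary least-squares fit of cover on `names`."""
    X = np.column_stack([np.ones(n)] + [preds[v] for v in names])
    fitted = X @ np.linalg.lstsq(X, cover, rcond=None)[0]
    return 1 - np.sum((cover - fitted) ** 2) / np.sum((cover - cover.mean()) ** 2)

# Hierarchical partitioning: a predictor's independent contribution is the
# average (over hierarchy levels, then over subsets within each level) of
# the R-squared gain from adding it to a model of the other predictors
independent = {}
for v in preds:
    others = [o for o in preds if o != v]
    level_means = []
    for k in range(len(others) + 1):
        gains = [r_squared(list(sub) + [v]) - r_squared(list(sub))
                 for sub in itertools.combinations(others, k)]
        level_means.append(sum(gains) / len(gains))
    independent[v] = sum(level_means) / len(level_means)
```

Because the simulated cover depends strongly on slope, weakly on soil moisture and not at all on vegetation cover, the independent contributions recover that ordering, which is exactly the "most likely causal variable" reading HP provides alongside the GLMs.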
Resumo:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem whose goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods for the haplotype inference problem are presented, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
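To give a flavour of what such a founder-mosaic hidden Markov model computes, here is a toy sketch: the forward-algorithm probability of a single 0/1 haplotype given a set of founder haplotypes, where switching founders stands in for recombination and an emission mismatch stands in for a point mutation. The function name, parameter values and encoding are invented for illustration and do not reproduce HaploParser or HIT, which work on genotype data.

```python
def mosaic_likelihood(hap, founders, recomb=0.05, mut=0.01):
    """Forward-algorithm probability of a 0/1 haplotype string under a
    founder-mosaic HMM: the hidden state at each site is the founder
    being copied, a founder switch models recombination, and an
    emission mismatch models a point mutation."""
    K = len(founders)

    def emit(obs, src):
        # copy the founder allele, flipped with probability `mut`
        return 1 - mut if obs == src else mut

    # uniform choice of founder at the first site
    fwd = [emit(hap[0], f[0]) / K for f in founders]
    for site in range(1, len(hap)):
        total = sum(fwd)
        fwd = [((1 - recomb) * fwd[k] + recomb * (total - fwd[k]) / (K - 1))
               * emit(hap[site], f[site])
               for k, f in enumerate(founders)]
    return sum(fwd)
```

With founders "0000" and "1111", the haplotype "0011" (one recombination) comes out more likely than "0101" (three recombinations), which is the biologically plausible behaviour the mosaic model is designed to capture.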
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one is interested in whether the local similarities between two sequences are due to the sequences being related or merely due to chance. The similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking from the null model two sequences whose best local alignment score is as good as or better than the observed one. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
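For contrast with the analytic framework described above, the naive baseline can be sketched: score a sequence pair with Smith-Waterman local alignment and estimate the p-value by sampling random pairs from an i.i.d. null model. This sampling-based approach is exactly what the thesis's framework avoids; the scoring parameters and alphabet below are arbitrary choices for illustration.

```python
import random

def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment (Smith-Waterman) score of strings a and b,
    computed row by row with linear memory."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            s = max(0,  # a local alignment may restart anywhere
                    prev[j - 1] + (match if ca == cb else mismatch),
                    prev[j] + gap,      # gap in b
                    cur[j - 1] + gap)   # gap in a
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best

def empirical_pvalue(score, m, n, trials=200, alphabet="ACGT", seed=0):
    """Monte Carlo p-value: the fraction of random sequence pairs whose
    best local alignment score reaches `score` (i.i.d. null model)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = "".join(rng.choice(alphabet) for _ in range(m))
        b = "".join(rng.choice(alphabet) for _ in range(n))
        if sw_score(a, b) >= score:
            hits += 1
    return hits / trials
```

The drawbacks the thesis addresses are visible even in this toy: the estimate needs many trials to resolve small p-values, and extrapolating beyond the sampled scores requires the curve fitting mentioned above.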