Abstract:
Basidiomycetous white-rot fungi are the only organisms that can efficiently decompose all the components of wood. Moreover, white-rot fungi possess the ability to mineralize the recalcitrant lignin polymer with their extracellular, oxidative lignin-modifying enzymes (LMEs), i.e. laccase, lignin peroxidase (LiP), manganese peroxidase (MnP), and versatile peroxidase (VP). Within one white-rot fungal species, LMEs are typically present as several isozymes encoded by multiple genes. This study focused on two efficient lignin-degrading white-rot fungal species, Phlebia radiata and Dichomitus squalens. Molecular-level knowledge of the LMEs of the Finnish isolate P. radiata FBCC43 (79, ATCC 64658) was complemented with cloning and characterization of a new laccase (Pr-lac2), two new LiP-encoding genes (Pr-lip1, Pr-lip4), and the Pr-lip3 gene, which had previously been described only at the cDNA level. Also, two laccase-encoding genes (Ds-lac3, Ds-lac4) of D. squalens were cloned and characterized for the first time. Phylogenetic analysis revealed close evolutionary relationships between the P. radiata LiP isozymes. Distinct protein phylogeny for both P. radiata and D. squalens laccases suggested different physiological functions for the corresponding enzymes. Supplementation of P. radiata liquid culture medium with excess Cu2+ notably increased laccase activity, and good fungal growth was achieved in complex medium rich in organic nitrogen. Wood is the natural substrate of lignin-degrading white-rot fungi, supporting production of the enzymes and metabolites needed for fungal growth and the breakdown of lignocellulose. In this work, the emphasis was on solid-state wood or wood-containing cultures that mimic the natural growth conditions of white-rot fungi. Transcript analyses showed that wood promoted expression of all the presently known LME-encoding genes of P. radiata and laccase-encoding genes of D. squalens. Expression of the studied individual LME-encoding genes of P. radiata and D.
squalens was unequal in transcript quantities and apparently time-dependent, thus suggesting the importance of several distinct LMEs within one fungal species. In addition to LMEs, white-rot fungi secrete other compounds that are important in the decomposition of wood and lignin. One of these compounds is oxalic acid, a common metabolite of wood-rotting fungi. Fungi also produce oxalic acid-degrading enzymes, of which the most widespread is oxalate decarboxylase (ODC). However, the role of ODC in fungi is still ambiguous, with propositions ranging from regulation of intra- and extracellular oxalic acid levels to a function in primary growth and concomitant production of ATP. In this study, intracellular ODC activity was detected in four white-rot fungal species, and D. squalens showed the highest ODC activity upon exposure to oxalic acid. Oxalic acid was the most common organic acid secreted by the ODC-positive white-rot fungi and the only organic acid detected in wood cultures. The ODC-encoding gene Ds-odc was cloned from two strains of D. squalens, representing the first characterization of an odc gene from a white-rot polypore species. Biochemical properties of the D. squalens ODC resembled those described for other basidiomycete ODCs. However, the translated amino acid sequence of Ds-odc has a novel N-terminal primary structure with a repetitive Ala-Ser-rich region of ca. 60 amino acid residues in length. Expression of the Ds-odc transcripts suggested a constitutive metabolic role for the corresponding ODC enzyme. According to the results, it is proposed that ODC may be essential for the growth and basic metabolism of wood-decaying fungi.
Abstract:
Composting is the biological conversion of solid organic waste into usable end products such as fertilizers, substrates for mushroom production and biogas. Although composts are highly variable in their bulk composition, composting material is generally based on lignocellulose compounds derived from agricultural, forestry, fruit and vegetable processing, household and municipal wastes. Lignocellulose is very recalcitrant; however, it is a rich and abundant source of carbon and energy. Therefore, lignocellulose degradation is essential for maintaining the global carbon cycle. In compost, the active component involved in the biodegradation and conversion processes is the resident microbial population, among which microfungi play a very important role. In a composting pile, the warm, humid, and aerobic environment provides the optimal conditions for their development. Microfungi use many carbon sources, including lignocellulosic polymers, and can survive in extreme conditions. Typically, microfungi are responsible for compost maturation. In order to improve the composting process, more information is needed about the microbial degradation process. Better knowledge of lignocellulose degradation by microfungi could be used to optimize the composting process. Thus, this thesis focused on the degradation of lignocellulose and humic compounds by the microfungus Paecilomyces inflatus, which belongs to the common microbial flora of compost, soil and decaying plant remains. It is a very common species in Europe, North America and Asia. The degradation of lignocellulose and humic compounds was studied using several methods, including measurements of carbon release from 14C-labelled compounds, such as synthetic lignin (dehydrogenative polymer, DHP) and humic acids, as well as by determination of fibre composition using chemical detergents and sulphuric acid. Spectrophotometric enzyme assays were conducted to detect extracellular lignocellulose-degrading hydrolytic and oxidative enzymes.
Paecilomyces inflatus clearly secreted extracellular laccase into the culture medium. Laccase was involved in the degradation of lignin and humic acids. In compost, P. inflatus mineralised 6-10% of 14C-labelled DHP into carbon dioxide. About 15% of the labelled DHP was converted into water-soluble compounds. Humic acids were also partly mineralised and converted into water-soluble material, such as low-molecular-mass fulvic acid-like compounds. Although laccase activity in aromatics-rich compost media is clearly connected with the degradation of lignin and lignin-like compounds, it may preferentially effect the polymerisation and/or detoxification of such aromatic compounds. P. inflatus can also degrade lignin and carbohydrates while growing in straw and wood. The cellulolytic enzyme system includes endoglucanase and β-glucosidase. In P. inflatus, the secretion of these enzymes was stimulated by low-molecular-weight aromatics, such as soil humic acid and veratric acid. When strains of P. inflatus from different ecophysiological origins were compared, indications were found that specific adaptation strategies needed for the degradation of lignocellulosics may operate in P. inflatus. The degradative features of these microfungi are of relevance for lignocellulose decomposition in nature, especially in soil and compost environments where basidiomycetes are not established. The results of this study may help to understand, control and better design the process of plant polymer conversion in the compost environment, with special emphasis on the role of ubiquitous microfungi.
Abstract:
Formulating a quantum theory of gravitation has been a goal of theoretical physicists since the birth of quantum mechanics. Applying quantum mechanics to high-energy phenomena within the framework of general relativity leads to an operational noncommutativity of the space-time coordinates. Noncommutative space-time geometries are also encountered in certain low-energy limits of open string theories. A theory of gravitation in noncommutative space-time could be compatible with quantum mechanics; it could enable the description of the physics of processes at extremely short distances and high energies, believed to be nonlocal, and it could yield a theory consistent with general relativity at long distances. In this work, I consider gravitation as a gauge field theory of the Poincaré symmetry and aim to generalize this view to noncommutative space-times. First, I present the central role of the Poincaré symmetry in relativistic physics and how the classical theory of gravitation is derived as a gauge field theory of the Poincaré symmetry in commutative space-time. I continue by introducing noncommutative space-time and the formulation of quantum field theory in noncommutative space-time. Because of the local nature of gauge symmetries, I consider carefully the formulation of gauge field theories in noncommutative space-time. Special attention is paid to the twisted Poincaré symmetry of these theories, a new type of quantum symmetry possessed by noncommutative space-time. Next, I consider the problems of formulating a noncommutative theory of gravitation and the solutions proposed for them in the literature. I explain how all approaches so far fail to formulate covariance under general coordinate transformations, which is a cornerstone of general relativity.
Finally, I study the possibility of generalizing the twisted Poincaré symmetry into a local gauge symmetry, in the hope of obtaining a noncommutative gauge field theory of gravitation. I show that such a generalization cannot be achieved by twisting the Poincaré symmetry with a covariant twist element. Consequently, future research on noncommutative gravitation and twisted Poincaré symmetry should concentrate on other approaches.
Abstract:
This master's thesis explores some of the most recent developments in noncommutative quantum field theory. This old theme, first suggested by Heisenberg in the late 1940s, has had a renaissance during the last decade due to the firmly held belief that space-time becomes noncommutative at small distances and also due to the discovery that string theory in a background field gives rise to noncommutative field theory as an effective low-energy limit. This has led to interesting attempts to create a noncommutative standard model, a noncommutative minimal supersymmetric standard model, noncommutative gravity theories, etc. This thesis reviews themes and problems such as UV/IR mixing, charge quantization, how to deal with noncommutative symmetries, how to solve the Seiberg-Witten map, its connection to fluid mechanics, and the problem of constructing general coordinate transformations to obtain a theory of noncommutative gravity. Emphasis has been put on presenting both the group-theoretical results and the string-theoretical ones, so that a comparison of the two can be made.
Abstract:
The main purpose of the research was to examine chemistry matriculation examination questions as a summative assessment tool, and to show how the questions have evolved over the years. Summative assessment and its various test item classifications, the Finnish goal-oriented curriculum model, and Bloom's Revised Taxonomy of Cognitive Objectives formed the theoretical framework for the research. The research data consisted of 257 chemistry questions from 28 matriculation examinations between 1996 and 2009. The analysed test questions were formulated according to the national upper secondary school chemistry curricula of 1994 and 2003. A qualitative approach and a theory-driven content analysis method were employed in the research. Peer review was used to guarantee the reliability of the results. The research was guided by the following questions: (a) What kinds of test item formats are used in chemistry matriculation examinations? (b) How are the fundamentals of chemistry included in the chemistry matriculation examination questions? (c) What kinds of cognitive knowledge and skills do the chemistry matriculation examination questions require? The research indicates that summative assessment was used diversely in chemistry matriculation examinations. The tests included various test item formats and combinations thereof. The majority of the test questions were constructed-response items that were either verbal, quantitative, or experimental questions, symbol questions, or combinations of the aforementioned. The studied chemistry matriculation examinations seldom included selected-response items, which can be multiple-choice, alternate-choice, or matching items. The relative emphasis of the test item formats differed slightly depending on whether the test was part of an extensive general studies battery of tests in sciences and humanities, or a subject-specific test.
The classification framework developed in the research can be applied in chemistry and science education, and also in educational research. Chemistry matriculation examinations are based on the goal-oriented curriculum model and cover relatively well the fundamentals of chemistry included in the national curriculum. Most of the test questions related to the symbolism of chemical equations, inorganic and organic reaction types and applications, bonding and spatial structure in organic compounds, and stoichiometry problems. Only a few questions related to electrolysis, polymers, or buffer solutions. None of the test questions related to composites. There were no significant differences in emphasis between the tests formulated according to the national curriculum of 1994 or 2003. Chemistry matriculation examinations are cognitively demanding. The research shows that the majority of the test questions require higher-order cognitive skills. Most of the questions required analysis of procedural knowledge. Questions that required only remembering, or the processing of metacognitive knowledge, were not included in the research data. The required knowledge and skill level varied slightly between the test questions in the extensive general studies battery of tests in sciences and humanities and the subject-specific tests administered since 2006. The proportion of Finnish chemistry matriculation examination questions requiring higher-order cognitive knowledge and skills is very large compared to what is discussed in the research literature.
Abstract:
NMR spectroscopy enables the study of biomolecules, from peptides and carbohydrates to proteins, at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states. The information can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy are presented in this thesis. The pulse sequences are described in relation to the molecular information they provide. The pulse sequence experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of 13C,15N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied for structure refinement as well as for studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling-induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.
Abstract:
This thesis contains five experimental spectroscopic studies that probe the vibration-rotation energy level structure of acetylene and some of its isotopologues. The emphasis is on the development of laser spectroscopic methods for high-resolution molecular spectroscopy. Three of the experiments use cavity ringdown spectroscopy. One is a standard setup that employs a non-frequency-stabilised continuous-wave laser as a source. In the other two experiments, the same laser is actively frequency-stabilised to the ringdown cavity. This development allows for an increased repetition rate of the experimental signal, and thus the spectroscopic sensitivity of the method is improved. These setups are applied to the recording of several vibration-rotation overtone bands of both H¹²C¹²CH and H¹³C¹³CH. An intra-cavity laser absorption spectroscopy setup that uses a commercial continuous-wave ring laser and a Fourier transform interferometer is presented. The configuration of the laser is found to be sub-optimal for high-sensitivity work, but the spectroscopic results are good and show the viability of this type of approach. Several ro-vibrational bands of carbon-13-substituted acetylenes are recorded and analysed. Compared with earlier work, the signal-to-noise ratio of a laser-induced dispersed infrared fluorescence experiment is enhanced by more than one order of magnitude by exploiting the geometric characteristics of the setup. The higher sensitivity of the spectrometer leads to the observation of two new symmetric vibrational states of H¹²C¹²CH. The precision of the spectroscopic parameters of some previously published symmetric states is also improved. An interesting collisional energy transfer process is observed for the excited vibrational states, and this phenomenon is explained by a simple step-down model.
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, recoveries and relative errors. First, pressurised hot water extraction was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with the traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200°C and 250°C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300°C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated.
The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow through the vessel. These are good results, because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE, and it provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300°C even with the shortest heating time. Ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting point of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point of the analytes, because the flow-through saturation cell could not be used above the melting point. An exponential relationship was found between the solubilities measured for pyrene and anthracene and temperature.
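An exponential solubility-temperature relationship of the kind reported above can be linearised by taking logarithms and fitted by ordinary least squares. The sketch below uses made-up numbers, not the thesis measurements, and the model S = a·exp(b·T) is an assumption for illustration.

```python
import math

# Hypothetical solubility data (NOT the thesis measurements): solubility S
# is assumed to follow S = a * exp(b * T) over the measured range.
T = [160, 200, 240, 280, 320]       # temperature, deg C
S = [1.0, 4.0, 16.0, 64.0, 256.0]   # solubility, arbitrary units

# Taking logarithms linearises the model: ln S = ln a + b * T,
# which ordinary least squares fits directly.
y = [math.log(s) for s in S]
n = len(T)
T_mean, y_mean = sum(T) / n, sum(y) / n
b = (sum((t - T_mean) * (v - y_mean) for t, v in zip(T, y))
     / sum((t - T_mean) ** 2 for t in T))
a = math.exp(y_mean - b * T_mean)

# In this synthetic data S quadruples every 40 deg C, so b = ln(4)/40.
```

The linearisation weights the data unevenly (relative rather than absolute errors), which is often acceptable for solubilities spanning orders of magnitude.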
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox provided herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
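The point that maximum likelihood can be read as posterior-mode inference under a flat prior is easy to check numerically. The sketch below is a toy binomial example of my own, not from the thesis: it evaluates the unnormalised log-posterior on a grid, and with a constant prior the grid mode lands on the analytic maximum likelihood estimate k/n.

```python
import math

def binom_loglik(theta, n, k):
    """Binomial log-likelihood for k successes in n trials."""
    return k * math.log(theta) + (n - k) * math.log(1 - theta)

n, k = 20, 7
grid = [i / 1000 for i in range(1, 1000)]

# With a flat prior, log-posterior = log-likelihood + constant, so the
# posterior mode found on the grid equals the maximum likelihood estimate.
posterior_mode = max(grid, key=lambda t: binom_loglik(t, n, k))
# posterior_mode == 0.35, i.e. the analytic MLE k/n
```

The Gaussian (Laplace) approximation mentioned in the abstract then expands the log-posterior to second order around this mode.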
Abstract:
In this thesis, the use of the Bayesian approach to statistical inference in fisheries stock assessment is studied. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the problem of monitoring and prediction of the juvenile salmon population in the River Tornionjoki as an example application. The River Tornionjoki is the largest salmon river flowing into the Baltic Sea. This thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling in the context of fisheries stock assessment. Each article of the thesis provides a novel method, either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of the population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation has been used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed, and potential solutions are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of subjective prior knowledge, needed in practically all parts of a Bayesian model, should be recognized and consequently fully utilised in the process of model formulation.
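As a minimal illustration of Bayesian population-size estimation of the kind mentioned above, consider a single mark-recapture experiment. This is a deliberately simplified sketch of my own, not the thesis model: a flat prior on the population size N (truncated at an assumed upper bound), a hypergeometric likelihood for the recapture count, and a grid computation instead of MCMC.

```python
from math import comb

def mark_recapture_posterior(n_marked, n_caught, n_recaptured, n_max=5000):
    """Posterior over population size N for one mark-recapture experiment,
    with a flat prior on N up to n_max (an illustrative simplification).
    The likelihood is the hypergeometric probability of the recapture count."""
    post = {}
    for N in range(max(n_marked, n_caught), n_max + 1):
        unmarked = N - n_marked
        if n_caught - n_recaptured > unmarked:
            continue  # impossible: not enough unmarked fish to be caught
        post[N] = (comb(n_marked, n_recaptured)
                   * comb(unmarked, n_caught - n_recaptured)
                   / comb(N, n_caught))
    z = sum(post.values())
    return {N: p / z for N, p in post.items()}

# 100 fish marked, 100 caught later, 20 of them recaptured: the posterior
# mode lies near the classical Lincoln-Petersen estimate 100*100/20 = 500.
post = mark_recapture_posterior(100, 100, 20)
mode = max(post, key=post.get)
```

Unlike the point estimate, the posterior also quantifies the considerable uncertainty in N, which is the practical motivation for the Bayesian treatment.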
Abstract:
The future use of genetically modified (GM) plants in food, feed and biomass production requires a careful consideration of the possible risks related to the unintended spread of transgenes into new habitats. This may occur via introgression of the transgene into conventional genotypes, due to cross-pollination, and via the invasion of GM plants into new habitats. Assessment of the possible environmental impacts of GM plants requires estimation of the level of gene flow from a GM population. Furthermore, management measures for reducing gene flow from GM populations are needed in order to prevent possible unwanted effects of transgenes on ecosystems. This work develops modeling tools for estimating gene flow from GM plant populations in boreal environments and for investigating the mechanisms of the gene flow process. To describe the spatial dimensions of the gene flow, dispersal models are developed for the local- and regional-scale spread of pollen grains and seeds, with special emphasis on wind dispersal. This study provides tools for describing cross-pollination between GM and conventional populations and for estimating the levels of transgenic contamination of conventional crops. For perennial populations, a modeling framework describing the dynamics of plants and genotypes is developed, in order to estimate the gene flow process over a sequence of years. The dispersal of airborne pollen and seeds cannot be easily controlled, and small amounts of these particles are likely to disperse over long distances. Wind dispersal processes are highly stochastic due to variation in atmospheric conditions, so that there may be considerable variation between individual dispersal patterns. This, in turn, is reflected in the large amount of variation in annual levels of cross-pollination between GM and conventional populations.
Even though land-use practices have effects on the average levels of cross-pollination between GM and conventional fields, the level of transgenic contamination of a conventional crop remains highly stochastic. The demographic effects of a transgene have impacts on the establishment of transgenic plants amongst conventional genotypes of the same species. If the transgene gives a plant a considerable fitness advantage in comparison to conventional genotypes, the spread of transgenes to conventional populations can be strongly increased. In such cases, dominance of the transgene considerably increases gene flow from GM to conventional populations, due to the enhanced fitness of heterozygous hybrids. The fitness of GM plants in conventional populations can be reduced by linking the selectively favoured primary transgene to a disfavoured mitigation transgene. Recombination between these transgenes is a major risk related to this technique, especially because it tends to take place amongst the conventional genotypes and thus promotes the establishment of invasive transgenic plants in conventional populations.
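The stochastic character of wind dispersal described above can be caricatured with a few lines of Monte Carlo. The kernel below is an assumption purely for illustration (an isotropic 2D Gaussian, whereas real pollen dispersal kernels are typically fat-tailed) and is not the dispersal model of the thesis.

```python
import math
import random

def fraction_beyond(distance, sigma, n_grains=100_000, seed=42):
    """Monte Carlo estimate of the fraction of pollen grains dispersing
    farther than `distance` from the source, assuming an isotropic 2D
    Gaussian dispersal kernel with scale sigma (illustrative only)."""
    rng = random.Random(seed)
    far = sum(
        1 for _ in range(n_grains)
        if math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) > distance
    )
    return far / n_grains

# For this kernel the tail is known in closed form:
# P(r > d) = exp(-d**2 / (2 * sigma**2)), about 0.135 at d = 2*sigma.
frac = fraction_beyond(distance=20.0, sigma=10.0)
```

A fat-tailed kernel would put far more mass at long distances than this Gaussian, which is exactly why long-distance dispersal is hard to control in practice.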
Abstract:
This thesis consists of three articles on passive vector fields in turbulence. The vector fields interact with a turbulent velocity field, which is described by the Kraichnan model. The effect of the Kraichnan model on the passive vectors is studied via an equation for the pair correlation function and its solutions. The first paper is concerned with the passive magnetohydrodynamic equations. Emphasis is placed on the so-called "dynamo effect", which in the present context is understood as an unbounded growth of the pair correlation function. The exact analytical conditions for such growth are found in the cases of zero and infinite Prandtl numbers. The second paper contains an extensive study of a number of passive vector models. Emphasis is now on the properties of the (assumed) steady state, namely anomalous scaling, anisotropy, and small- and large-scale behavior with different types of forcing or stirring. The third paper is in many ways a completion of the previous one in its study of the steady-state existence problem. Conditions for the existence of the steady state are found in terms of the spatial roughness parameter of the turbulent velocity field.
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classical examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither the statistical dependency nor the statistical significance is a monotonic property, which means that the traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z-scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry about whether the discovered dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
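Of the measures named above, Fisher's exact test has a closed form that can be sketched directly from the hypergeometric distribution (a standard formulation, assuming nothing about the thesis's own implementation; the counts in the example are invented):

```python
from math import comb

def fisher_one_sided(n, n_x, n_a, n_xa):
    """One-sided Fisher's exact p-value for a positive dependency X -> A.

    n    = total rows, n_x = rows where X holds, n_a = rows where A holds,
    n_xa = rows where both hold.  With the margins fixed, the p-value is
    P(hypergeometric count >= n_xa).
    """
    k_max = min(n_x, n_a)
    total = comb(n, n_a)
    return sum(comb(n_x, k) * comb(n - n_x, n_a - k)
               for k in range(n_xa, k_max + 1)) / total

# A strongly dependent 2x2 table: X holds in 10 of 20 rows, A in 10,
# and they co-occur in 9 rows.
p = fisher_one_sided(20, 10, 10, 9)
# The smaller p is, the stronger the evidence that the co-occurrence
# of X and A is not due to chance.
```

Note the non-monotonicity the abstract points out: adding an attribute to X can make this p-value either smaller or larger, which is why naive frequency-based pruning does not apply.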
Abstract:
This thesis examines the posting of workers within the free movement of services in the European Union. The emphasis is on the case law of the European Court of Justice and on the role it has played in the liberalisation of the service sector with respect to the posting of workers. The case law is examined from two different viewpoints: firstly, that of employment law and, secondly, that of immigration law. The aim is to find out how active a role the Court has taken with regard to these two fields of law and what the implications of the Court's judgments are for regulation at the national level. The first part of the thesis provides a general review of the Community law principles governing the freedom to provide services in the EU. The second part presents the Posted Workers' Directive and the case law of the European Court of Justice before and after the enactment of the Directive from the viewpoint of employment law. Special attention is paid to a recent judgment in which the Court took a restrictive position with regard to a trade union's right to take collective action against a service provider established in another Member State. The third part of the thesis concentrates, firstly, on the legal status of non-EU nationals lawfully resident in the EU. Secondly, it looks into the question of how the Court's case law has affected the possibilities of using non-EU nationals as posted workers within the freedom to provide services. The final chapter includes a critical analysis of the Court's case law on posted workers. The judgments of the European Court of Justice are the principal source of law for this thesis. In the primary legislation, the focus is on Articles 49 EC and 50 EC, which lay down the rules concerning the free movement of services. Within the secondary legislation, the present work principally concentrates on the Posted Workers' Directive. It also examines proposals of the European Commission and directives that have been adopted in the field of immigration.
The conclusions of the case study are twofold: while in the field of employment law the European Court of Justice has based its judgments on a very literal interpretation of the Posted Workers' Directive, in the field of immigration its conclusions have been much more innovative. In both fields of regulation the Court's judgments have far-reaching implications for the rules concerning the posting of workers, leaving very little discretion to the Member States' authorities.
The Mediated Immediacy: João Batista Libanio and the Question of Latin American Liberation Theology
Abstract:
This study is a systematic analysis of mediated immediacy in the production of the Brazilian professor of theology João Batista Libanio. He stresses both the ethical mediation and the immediate character of faith. Libanio has sought an answer to the problem of science and faith. He makes use of the neo-scholastic distinction between matter and form. According to St. Thomas Aquinas, God cannot be known as a scientific object, but it is possible to predicate a formal theological content of other subject matter with the help of revelation. This viewpoint was emphasized in neo-Thomism and supported by the liberation theologians. For them, the material starting point was social science. It becomes a theologizable or revealable (revelabile) reality. This social science has its roots in Latin American Marxism, which was influenced by the school of Louis Althusser and considered Marxism a "science of history". The synthesis of Thomism and Marxism is a challenge Libanio faced, especially in his Teologia da libertação from 1987. He emphasized the need for a genuinely spiritual and ethical discernment, and was particularly critical of the ethical implications of class struggle. Libanio's thinking has a strong hermeneutic flavor: it is more important to understand than to explain. He does not deny the need for social scientific data, but maintains that they cannot be the exclusive starting point of theology. There are different readings of the world, both scientific and theological. A holistic understanding of the nature of religious experience is needed. Libanio follows the interpretation given by H. C. de Lima Vaz, according to whom the Hegelian dialectic is a rational circulation between the totality and its parts. He also recalls Oscar Cullmann's idea of God's Kingdom that is "already" and "not yet". In other words, there is a continuous mediation of grace into the natural world. This dialectic is reflected in ethics: faith must be verified in good works.
Libanio uses the Thomist fides caritate formata principle and the modern orthopraxis thinking represented by Edward Schillebeeckx. One needs both the ortho of good faith and the praxis of right action. The mediation of praxis is the mediation of human and divine love. Libanio's theology has strong roots in the Jesuit spirituality that places the emphasis on contemplation in action.