24 results for General Linear Methods
in Helda - Digital Repository of University of Helsinki
Abstract:
This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herb Angelica archangelica. The effect of the extraction parameters was studied, and sensory analyses of the extracts were performed by an expert panel. The results of the sensory analyses were compared with the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds. Very low (cryogenic) temperatures were applied to substantially enhance the separation efficiency of chiral SFC. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The production rate, which is a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterified fish oil by reversed-phase SFC. The chiral separation of racemic mixtures of guaifenesin and a ferulic acid dimer ethyl ester was optimized using a response surface method with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content) the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared with published results for similar applications. A novel pressure-compensated design of an industrial high-pressure chromatographic column was introduced, using technology developed in building the deep-sea submersibles Mir 1 and Mir 2. A demonstration SFC plant was built, and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia. A smaller semi-pilot-scale column of similar design was used for the cryogenic chiral separation of the aromatase inhibitor Finrozole during phase 2 of its development.
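The four-variable optimization described in the abstract lends itself to a standard response-surface workflow. As a minimal sketch (with invented data and coded factors, not the thesis's actual measurements), a full quadratic general linear model can be fitted to the production rate and the fitted optimum located by grid search:

```python
# Minimal sketch of response-surface optimization of a chromatographic
# production rate via a quadratic general linear model. The design,
# responses and factor codings below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design over four coded factors in [-1, 1]:
# temperature, load, flow rate, modifier content.
X = rng.uniform(-1, 1, size=(30, 4))

# Hypothetical measured production rate.
y = (10 + 2*X[:, 0] - 1.5*X[:, 1]**2 + X[:, 2]
     + 0.5*X[:, 0]*X[:, 3] + rng.normal(0, 0.2, size=30))

def design_matrix(X):
    """Intercept, linear, two-way interaction and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i]**2 for i in range(k)]
    return np.column_stack(cols)

# Ordinary least-squares fit of the response surface.
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Grid search for the coded settings that maximize the fitted surface.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 4)).reshape(4, -1).T
best = grid[np.argmax(design_matrix(grid) @ beta)]
print("coded optimum (temperature, load, flow rate, modifier):", best)
```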
Abstract:
How toddlers with special needs adjust to the daycare setting: a multiple case study of how relationships with adults and children are built. The aim of this study is to describe how toddlers with special needs adjust to daycare. The study focuses in particular on the toddlers' emotional well-being and involvement in daycare activities, and examines how relationships are built between an adult and a child and between child and child. Daycare is examined through sociocultural theory as a pedagogical institution in which the child adapts by participating in social and cultural activities with others. The development of the child is the result of experiences gained through the constant relationship between the child, the family and the social context. According to attachment theory, the inner self-regulation that allows the child to adapt safely to new situations develops primarily in the relationship between the child under three years of age and the attending adult. Relationships between toddlers in daycare are usually built through incidental encounters in play and daily activities. In these relationships, the toddler gains information about himself or herself and about the other children. The complexity of the rules that organize social action in the setting is challenging for the children, and they need constant support from the adults. The participants of the study were five toddlers with special needs. When applying to daycare they were less than three years old, and they had received a specialist's statement of their special needs and a referral to daycare. The children were observed by recording their attendance at the daycare once every 3-4 months from their first day in daycare. Approximately 15 hours of material was analysed with the Transana program. The qualitative material was analysed by first constructing a descriptive model that explains and theorises the phenomenon. From the summary of the narrative, a hypothesis was formulated and tested by quantitative methods using correlations, analyses of variance, and general linear modelling, which was used to assess differences between repeated measures and connections between variables. The results of the study are built theoretically towards a consistent conception between the theory and the research findings. The toddlers in the study were all dependent on adult support in all daycare situations. They could not associate with the other children without the support of the adults, and their involvement in activities was low. The engagement of an adult in interaction was necessary for the children's involvement in activities and for co-operation with the other children. The engagement of teachers was statistically significantly higher than that of other professions.
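As a rough illustration of the repeated-measures general linear modelling named in the abstract (a sketch with invented children, time points and scores, not the study's data), involvement observed at successive measurement points can be compared like this:

```python
# Sketch of a repeated-measures analysis of the kind the abstract names.
# The children, time points and involvement scores below are invented.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "child":       [c for c in "ABCDE" for _ in range(4)],
    "timepoint":   [t for _ in "ABCDE" for t in (1, 2, 3, 4)],
    "involvement": [2.1, 2.4, 2.2, 2.8, 1.8, 2.0, 2.5, 2.6,
                    2.3, 2.2, 2.7, 3.0, 1.9, 2.1, 2.0, 2.4,
                    2.0, 2.6, 2.9, 3.1],
})

# One-way repeated-measures ANOVA: does involvement change over time?
res = AnovaRM(data, depvar="involvement", subject="child",
              within=["timepoint"]).fit()
print(res.anova_table)
```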
Abstract:
The aim of this study was to explore the sociocultural value orientations of Finnish adolescents and their attitudes toward the information society, as well as the association between the two. I investigated whether values and attitudes follow social development and whether they can be divided into value categories such as traditional, modern and postmodern. This study falls into the category of youth research. The study uses a multimethodological approach and straddles the following disciplines: educational science, religious education, sociology and social psychology. The theoretical context of the study is modernisation, understood as a two-level process. The first level represents the transition from a religion-based traditional society to a modern industrial society. The second level refers to the process of development established after the Second World War, called postmodernisation, which is understood as the transition from an emphasis on economic imperatives to an emphasis on subjective well-being and quality of life. Postmodernisation influences both social organisations and individuals' values and worldviews. The target group of this survey study comprised 408 16- to 19-year-old Finnish adolescents from secondary schools and vocational schools. The data were gathered with a quantitative questionnaire during the second half of 2001. The results of the study can be generalised to the population of Finnish 16- to 19-year-olds. The data were analysed quantitatively using ANOVA and multivariate analyses such as cluster analysis, factor analysis and general linear modeling. Bayesian dependence modeling served to explore further how values predict attitudes toward the information society. The results indicate that values are associated not only with attitudes toward the information society, but with many other sociocultural indicators as well. Especially strong interpreting indicators included gender and identity or lifestyle questions. The results also indicate an association between values, attitudes and social development, and support the two-level modernisation process. Values formed traditional, modern and postmodern value systems. Keywords: values, attitudes, modernisation, information society, traditional, modern, postmodern
Abstract:
The ageing of the labour force and falling employment rates have forced policy makers in industrialized countries to find means of increasing the well-being of older workers and of lengthening their work careers. The main objective of this thesis was to study longitudinally how health, functional capacity, subjective well-being and lifestyle change as people grow older, and what effect retirement has on these factors and on their relationships. The present study is a follow-up questionnaire study of Finnish municipal workers, conducted from 1981 to 1997 at the Finnish Institute of Occupational Health. In 1981, a postal questionnaire was sent to 7344 municipal workers in different parts of Finland. The respondents were born between 1923 and 1937. A total of 6257 persons responded to the first questionnaire, and in the end 3817 persons had responded to all four questionnaires (1981, 1985, 1992, 1997); the response rate was 69% of the living participants. Cross-tabulations, comparisons of means, logistic regression analyses and general linear models with repeated measures were used to derive the results. The transition from work life to retirement, and the following years as a pensioner, were associated with many changes. Involvement in various activities increased during the transition stage but later decreased to the previous level. Physical exercise was an exception: it became increasingly popular over the years. Perceived health improved markedly from the working stage to the retirement transition stage, even though morbidity increased steadily during the follow-up. On the other hand, functional capacity decreased over the follow-up, especially among those who were occupationally active until the retirement stage. Subjective well-being remained stable during the follow-up period. There were, however, great differences based on the type of work, favouring those whose work had been mental in nature. The impact of activity level on maintaining well-being became greater during the follow-up, whereas the effect of physical functioning diminished. Good physical functioning and an active lifestyle contributed to staying on at work until normal retirement age. Work-related factors, i.e. possibilities for development and influence at work, responsibility for others, meaningful work, and satisfaction with working time arrangements, were also positively related to continuing to work. The transition from work to retirement had a positive impact on a person's health and functional capacity. The study results support the view that it should be possible to ease one's work pace during the last years of a work career. This might lower the threshold between work and retirement and convince people that there will still be time to enjoy retirement a few years later as well.
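A minimal sketch of the kind of logistic regression analysis the abstract mentions, with invented variables and data (the predictor names merely echo factors discussed in the abstract):

```python
# Sketch of logistic regression predicting staying at work until normal
# retirement age. All variable names and data here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
physical_functioning = rng.normal(0, 1, n)   # standardized score
active_lifestyle = rng.integers(0, 2, n)     # 0 = no, 1 = yes
meaningful_work = rng.normal(0, 1, n)

# Invented outcome: 1 = stayed at work until normal retirement age.
logit = (-0.2 + 0.8*physical_functioning
         + 0.6*active_lifestyle + 0.5*meaningful_work)
stayed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([physical_functioning,
                                     active_lifestyle, meaningful_work]))
model = sm.Logit(stayed, X).fit(disp=False)
print(model.params)            # log-odds coefficients
print(np.exp(model.params[1:]))  # odds ratios for the three predictors
```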
Abstract:
Environmentally benign and economical methods for the preparation of industrially important hydroxy acids and diacids were developed. The carboxylic acids, used in polyesters, alkyd resins, and polyamides, were obtained by the oxidation of the corresponding alcohols with hydrogen peroxide or air, catalyzed by sodium tungstate or supported noble metals. These oxidations were carried out using water as the solvent. The alcohols are also a useful alternative to the conventional reactants, hydroxyaldehydes and cycloalkanes. The oxidation of 2,2-disubstituted propane-1,3-diols with hydrogen peroxide catalyzed by sodium tungstate afforded 2,2-disubstituted 3-hydroxypropanoic acids and 1,1-disubstituted ethane-1,2-diols as products. A computational study of the Baeyer-Villiger rearrangement of the intermediate 2,2-disubstituted 3-hydroxypropanals provided in-depth data on the mechanism of the reaction. Linear primary diols with a chain length of at least six carbons were easily oxidized with hydrogen peroxide to linear dicarboxylic acids, catalyzed by sodium tungstate. The Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols and linear primary diols afforded the highest yield of the corresponding hydroxy acids, while the Pt, Bi/C catalyzed oxidation of the diols afforded the highest yield of the corresponding diacids. The mechanism of the promoted oxidation was best described by the ensemble effect and by the formation of a complex between the hydroxy and carboxy groups of the hydroxy acids and bismuth atoms. The Pt, Bi/C catalyzed air oxidation of 2-substituted 2-hydroxymethylpropane-1,3-diols gave 2-substituted malonic acids by the decarboxylation of the corresponding triacids. Activated carbon was the best support and bismuth the most efficient promoter in the air oxidation of 2,2-dialkylpropane-1,3-diols to diacids. In oxidations carried out in organic solvents, barium sulfate could be a valuable alternative to activated carbon as a non-flammable support. In the Pt/C catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols to 2,2-disubstituted 3-hydroxypropanoic acids, the small size of the 2-substituents enhanced the rate of the oxidation. When the platinum potential of the catalyst was not controlled, the highest yield of the diacids in the Pt, Bi/C catalyzed air oxidation of 2,2-dialkylpropane-1,3-diols was obtained in the mass transfer limited regime. The most favourable pH of the reaction mixture for the promoted oxidation was 10. A reaction temperature of 40°C prevented the decarboxylation of the diacids.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than those of the general model, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
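For intuition about the LISAs used in the segmentation, here is a minimal sketch of local Moran's I, one common LISA, computed on the residuals of a global model with row-standardized k-nearest-neighbour weights; the coordinates and residuals are invented:

```python
# Minimal sketch of local Moran's I, a common LISA, computed on the
# residuals of a global model. Coordinates and residuals are invented.
import numpy as np

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(200, 2))   # plot locations
resid = rng.normal(0, 1, 200)                 # residuals of a global model

k = 8                                          # neighbours per observation
z = (resid - resid.mean()) / resid.std()

# Pairwise Euclidean distances; each point's k nearest neighbours.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nbrs = np.argsort(d, axis=1)[:, :k]

# Row-standardized weights: local I_i = z_i * mean(z_j over neighbours j).
local_I = z * z[nbrs].mean(axis=1)

# Large positive I_i: observation i sits in a cluster of similar residuals,
# which is what the segmentation step described in the abstract exploits.
print(local_I[:5])
```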
Abstract:
Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide, such as fluorine-18 (18F). Development of a new radiotracer requires an appropriate radiosynthesis method, the most common of which with 18F is nucleophilic substitution with the [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride and the structure of the target compound, in addition to the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and the quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of a tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of the solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out both in polar aprotic and in protic solvents (tertiary alcohols). Identification of the 18F-fluorinated products by mass spectrometry (MS), in addition to conventional radiochromatographic methods, was studied using the radiosynthesis of 4-[18F]fluoro-N-[2-[1-(2-methoxyphenyl)-1-piperazinyl]ethyl]-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. Labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor. Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction at a mass-to-charge (m/z) ratio of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. The ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. Thus, it was concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT with PET.
Abstract:
This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs. (i) Computational complexity of grammars under limiting parameters. The computational complexity of practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations. Regarding linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented, including, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are parseable in linear time. (iii) Compilation and simplification of linguistic constraints. Efficient compilation methods for certain regular operations, such as generalized restriction, are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched. These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
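As generic background for the "intersection" in FSIG (a textbook sketch, not the dissertation's own formalism), constraints represented as finite automata can be combined with the classic product construction:

```python
# Generic product construction for intersecting two DFAs, the basic
# operation behind finite-state intersection approaches.
from collections import deque

def intersect(dfa1, dfa2, alphabet):
    """Each DFA: (start, accepting_set, delta) with delta[(state, sym)].
    Returns a product DFA accepting the intersection of the languages."""
    start = (dfa1[0], dfa2[0])
    delta, accepting, seen = {}, set(), {start}
    queue = deque([start])
    while queue:
        p, q = queue.popleft()
        if p in dfa1[1] and q in dfa2[1]:
            accepting.add((p, q))
        for a in alphabet:
            nxt = (dfa1[2][(p, a)], dfa2[2][(q, a)])
            delta[((p, q), a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return start, accepting, delta

def accepts(dfa, word):
    state = dfa[0]
    for sym in word:
        state = dfa[2][(state, sym)]
    return state in dfa[1]

# Toy constraints over {'a', 'b'}: "even number of a's" and "ends in b".
even_a = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1})
ends_b = (0, {1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1})
both = intersect(even_a, ends_b, 'ab')
print(accepts(both, "abab"))  # True: two a's (even) and ends in b
```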
Abstract:
In this dissertation, I present an overall methodological framework for studying linguistic alternations, focusing specifically on lexical variation in denoting a single meaning, that is, synonymy. As the practical example, I employ the synonymous set of the four most common Finnish verbs denoting THINK, namely ajatella, miettiä, pohtia and harkita 'think, reflect, ponder, consider'. As a continuation of previous work, I describe in considerable detail the extension of statistical methods from dichotomous linguistic settings (e.g., Gries 2003; Bresnan et al. 2007) to polytomous ones, that is, settings concerning more than two possible alternative outcomes. The applied statistical methods are arranged into a succession of stages of increasing complexity, proceeding from univariate via bivariate to multivariate techniques. As the central multivariate method, I argue for the use of polytomous logistic regression and demonstrate its practical implementation for the studied phenomenon, thus extending the work of Bresnan et al. (2007), who applied simple (binary) logistic regression to a dichotomous structural alternation in English. The results of the various statistical analyses confirm that a wide range of contextual features across different categories are indeed associated with the use and selection of the selected THINK lexemes; however, a substantial part of these features is not exemplified in current Finnish lexicographical descriptions. The multivariate analysis results indicate that the semantic classifications of syntactic argument types are on average the most distinctive feature category, followed by overall semantic characterizations of the verb chains, and then syntactic argument types alone, with morphological features pertaining to the verb chain and extra-linguistic features relegated to the last position. In terms of the overall performance of the multivariate analysis and modeling, the prediction accuracy seems to reach a ceiling at a recall rate of roughly two-thirds of the sentences in the research corpus. The analysis of these results suggests a limit to what can be explained and determined within the immediate sentential context by applying the conventional descriptive and analytical apparatus based on currently available linguistic theories and models. The results also support Bresnan's (2007) and others' (e.g., Bod et al. 2003) probabilistic view of the relationship between linguistic usage and the underlying linguistic system, in which only a minority of linguistic choices are categorical, given the known context, represented as a feature cluster, that can be analytically grasped and identified. Instead, most contexts exhibit degrees of variation as to their outcomes, resulting in proportionate choices over longer stretches of usage in texts or speech.
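A minimal sketch of polytomous (multinomial) logistic regression over the four THINK verbs, assuming invented binary contextual features rather than the thesis's actual feature set:

```python
# Sketch of polytomous (multinomial) logistic regression for choosing
# among four near-synonymous verbs. Features and data are invented;
# real features would be contextual, as in the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
verbs = ["ajatella", "miettiä", "pohtia", "harkita"]

# Invented contextual features per sentence, e.g. indicator variables
# for semantic classes of the syntactic arguments.
X = rng.integers(0, 2, size=(400, 6))
y = rng.choice(verbs, size=400)

# The lbfgs solver handles multiclass via a multinomial (softmax) loss.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Per-class probabilities for a new context: a proportionate choice
# among the four lexemes rather than a single categorical outcome.
print(dict(zip(model.classes_, model.predict_proba(X[:1])[0].round(3))))
```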
Abstract:
This thesis explores melodic and harmonic features of heavy metal and, while doing so, explores various methods of music analysis, their applicability and their limitations with regard to the study of heavy metal music. The study is built on three general hypotheses, according to which 1) acoustic characteristics play a significant role in chord construction in heavy metal, 2) heavy metal has strong ties and similarities with other Western musical styles, and 3) theories and analytical methods of Western art music may be applied to heavy metal. It seems evident that in heavy metal some chord structures appear far more frequently than others. It is suggested here that the fundamental reason for this is the use of the guitar distortion effect. Consequently, theories as to how and on what principles heavy metal is constructed need to be put up for discussion; analytical models regarding the classification of consonance and dissonance and the categorization of chords are revised here to meet the common practices of this music. It is evident that heavy metal is not an isolated style of music; it is seen here as a cultural fusion of various musical styles. Moreover, it is suggested that the theoretical background of the construction of Western music and its analysis can offer invaluable insights into heavy metal. However, the analytical methods need to be reformed to some extent to meet the characteristics of the music. This reformation includes a combination of linear and functional theories that has rather rarely been found in music theory and musicology.
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, for a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
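For orientation (standard definitions, not results of the articles), a chordal SLE(kappa) in the upper half-plane is the random curve encoded by the Loewner equation with a scaled Brownian driving function:

```latex
% Chordal Loewner equation: g_t conformally maps the complement of the
% curve (up to time t) back to the upper half-plane; kappa sets the
% universality class (e.g. percolation, Ising interfaces).
\[
  \partial_t g_t(z) = \frac{2}{g_t(z) - W_t},
  \qquad g_0(z) = z,
  \qquad W_t = \sqrt{\kappa}\, B_t .
\]
```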
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion in MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even to approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
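To make the "exponential sum" concrete, here is the standard textbook computation of the NML normalizer and stochastic complexity for the Bernoulli model class, where the sum over all 2^n datasets reduces to a sum over the count of ones; this illustrates the definition, not the thesis's more efficient techniques:

```python
# NML stochastic complexity for a Bernoulli sample: the normalizer sums
# the maximized likelihood over all possible datasets, grouped by the
# count k of ones. Textbook case only, not the thesis's own algorithms.
from math import comb, log

def bernoulli_nml_normalizer(n):
    """C(n) = sum_k binom(n, k) * (k/n)^k * ((n-k)/n)^(n-k)."""
    total = 0.0
    for k in range(n + 1):
        p = k / n
        total += comb(n, k) * (p**k) * ((1 - p)**(n - k))
    return total

def stochastic_complexity(ones, n):
    """-log of the NML probability of a sample with `ones` ones out of n."""
    p = ones / n
    max_lik = (p**ones) * ((1 - p)**(n - ones))
    return -log(max_lik / bernoulli_nml_normalizer(n))

print(stochastic_complexity(30, 100))  # code length in nats
```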
Abstract:
Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
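A minimal sketch of the Boolean matrix product underlying such decompositions, with invented toy factors: OR replaces addition and AND replaces multiplication, so the product of binary factors stays binary:

```python
# Sketch of the Boolean matrix product used in Boolean matrix
# decomposition: OR replaces addition, AND replaces multiplication.
# The matrices below are invented toy data.
import numpy as np

def boolean_matmul(B, C):
    """(B o C)[i, j] = OR_k (B[i, k] AND C[k, j])."""
    B, C = B.astype(bool), C.astype(bool)
    return (B[:, :, None] & C[None, :, :]).any(axis=1).astype(int)

# A 4x4 binary matrix expressed exactly as the Boolean product of a
# 4x2 and a 2x4 binary factor; each factor corresponds to a "pattern".
B = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]])
C = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]])
print(boolean_matmul(B, C))

# Under ordinary arithmetic, B @ C contains a 2 where the two patterns
# overlap; the Boolean product keeps the result binary.
print((B @ C)[1])
```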
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, in which a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes by utilizing measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying the techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools for processing raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
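To illustrate the steady-state balance constraints alone (a generic toy example, not the thesis's framework), the admissible flux space is the null space of the stoichiometric matrix, and each additional linear constraint derived from labelling data removes a degree of freedom:

```python
# Sketch of steady-state flux balance: S @ v = 0 for internal metabolites.
# The toy network below (invented) contains a cycle, so the balance
# constraints alone leave free flux directions that labelling data
# could pin down.
import numpy as np
from scipy.linalg import null_space

# Rows: internal metabolites A, B; columns: reactions
#   v1: -> A,  v2: A -> B,  v3: B -> ,  v4: B -> A (cycle back)
S = np.array([[ 1, -1,  0,  1],
              [ 0,  1, -1, -1]])

basis = null_space(S)           # columns span all steady-state flux vectors
print(basis.shape)              # (4, 2): two degrees of freedom remain

# A hypothetical labelling-derived linear constraint, e.g. v4 = 0.3 * v2,
# is appended as an extra row and shrinks the admissible flux space.
S_aug = np.vstack([S, [0, -0.3, 0, 1]])
print(null_space(S_aug).shape)  # (4, 1): one degree of freedom left
```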