120 results for STIFFLY-STABLE METHODS
in Helda - Digital Repository of University of Helsinki
Abstract:
The driving force behind this study has been the need to develop and apply methods for investigating the hydrogeochemical processes of significance to water management and artificial groundwater recharge. Isotope partitioning of elements in the course of physicochemical processes produces isotopic variations in their natural reservoirs. The tracer properties of the stable isotope abundances of oxygen, hydrogen and carbon have been applied to investigate hydrogeological processes in Finland. The work described here initiated the use of stable isotope methods to achieve a better understanding of these processes in the shallow glacigenic formations of Finland. In addition, the regional precipitation and groundwater records supplement the global precipitation data and, as importantly, provide primary background data for hydrological studies. The isotopic composition of oxygen and hydrogen in Finnish groundwaters and atmospheric precipitation was determined in water samples collected during 1995–2005. Prior to this study, no detailed records existed on the spatial or annual variability of the isotopic composition of precipitation or groundwaters in Finland. Groundwaters and precipitation in Finland display a distinct spatial distribution of the isotopic ratios of oxygen and hydrogen. The depletion of the heavier isotopes as a function of increasing latitude is closely related to the local mean surface temperature. No significant differences were observed between the mean annual isotope ratios of oxygen and hydrogen in precipitation and those in local groundwaters. These results suggest that the link between the spatial variability in the isotopic composition of precipitation and local temperature is preserved in groundwaters. Artificial groundwater recharge in glacigenic sedimentary formations offers many possibilities to apply the isotopic ratios of oxygen, hydrogen and carbon as natural isotopic tracers.
In this study the systematics of dissolved carbon were investigated in two geochemically different glacigenic groundwater formations: a typical esker aquifer at Tuusula, in southern Finland, and a carbonate-bearing aquifer with a complex internal structure at Virttaankangas, in southwest Finland. Reducing the concentration of dissolved organic carbon (DOC) in water is a primary challenge in the process of artificial groundwater recharge. The carbon isotope method was used as a tool to trace the role of redox processes in the decomposition of DOC. At the Tuusula site, artificial recharge leads to a significant decrease in the organic matter content of the infiltrated water. In total, 81% of the initial DOC present in the infiltrated water was removed in three successive stages of subsurface processes. Three distinct processes in the reduction of the DOC content were traced: the decomposition of dissolved organic carbon in the first stage of subsurface flow appeared to account for the most significant part of DOC removal, whereas the further decrease in DOC has been attributed to adsorption and finally to dilution with local groundwater. Here, isotope methods were used for the first time to quantify the processes of DOC removal in artificial groundwater recharge. Groundwaters in the Virttaankangas aquifer are characterized by high pH values exceeding 9, which are exceptional for shallow aquifers on glaciated crystalline bedrock. The Virttaankangas sediments were discovered to contain trace amounts of fine-grained, dispersed calcite, which has a high tendency to increase the pH of local groundwaters. Understanding the origin of the unusual geochemistry of the Virttaankangas groundwaters is an important issue for constraining the operation of the future artificial groundwater plant. The isotope ratios of oxygen and carbon in sedimentary carbonate minerals have been successfully applied to constrain the origin of the dispersed calcite in the Virttaankangas sediments.
The isotopic and chemical characteristics of the groundwater in the distinct units of the aquifer were observed to vary depending on the aquifer mineralogy, the groundwater residence time and the openness of the system to soil CO2. The high pH values of >9 have been related to the dissolution of calcite into groundwater under closed or nearly closed system conditions relative to soil CO2, at a low partial pressure of CO2.
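As background to the isotope ratios discussed above (a standard convention, not part of the abstract itself), stable isotope compositions of waters are conventionally reported in δ-notation relative to a reference standard such as VSMOW, and precipitation records are typically compared against the global meteoric water line (Craig, 1961):

```latex
% delta-notation: R is the heavy/light isotope abundance ratio
\delta = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right)
         \times 1000\ \text{\textperthousand}

% global meteoric water line, against which local precipitation
% records such as those described above are commonly compared
\delta\mathrm{D} = 8\,\delta^{18}\mathrm{O} + 10
```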
Abstract:
The aim of this study was to evaluate and test methods which could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model, and in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the formed sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours, with nearness measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than with the general model, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
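As an illustration of the LISA computation described above, the sketch below implements the local Moran's I statistic, a standard LISA, with NumPy. The chain-neighbourhood weights and toy residual values are assumptions invented for the example, not data from the study:

```python
import numpy as np

def local_morans_i(values, weights):
    """Local Moran's I, a common local indicator of spatial
    association (LISA), for each observation.

    values  : 1-D array, e.g. residuals of the general model
    weights : n x n row-standardised spatial weights matrix
              (zero diagonal)
    """
    z = values - values.mean()
    m2 = (z ** 2).sum() / len(z)      # variance-like scaling term
    return (z / m2) * (weights @ z)   # I_i = (z_i / m2) * sum_j w_ij z_j

# Toy example: six observations along a transect, neighbours = adjacent points
residuals = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
W = np.zeros((6, 6))
for i in range(6):
    for j in (i - 1, i + 1):
        if 0 <= j < 6:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)     # row-standardise the weights

lisa = local_morans_i(residuals, W)
# positive I_i: observation sits in a cluster of similar residuals;
# near-zero I_i: observation lies on the boundary between clusters
```

Observations inside either homogeneous run get positive scores, while the two boundary points score near zero, which is exactly the cluster-versus-boundary signal used to segment the study area.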
Abstract:
The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to enhance the prevalent analytics and to find reasonable strategies for effective sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low (LRMS) and high resolution mass spectrometry (HRMS) in the screening of AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2–0.5 ng/ml. Even with HRMS, however, the biological background hampered the detection of some steroids. The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any other sample preparation steps. Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied for the detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity.
ESI showed the best applicability, however, and an LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was thus developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml. The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full scan spectral data (m/z 40-700) were processed by the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract more than twice as many components from a GC/MS run as there were visible chromatographic peaks. Severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully. All of the studied techniques proved to be useful analytical tools for improving the detection of AAS in urine. The superiority of any one procedure is, however, compound-dependent, and the different techniques complement each other.
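The alternating-regression idea underlying OSCAR can be illustrated with a generic non-negativity-constrained alternating least squares (ALS) deconvolution of a synthetic two-component data matrix. This is a minimal sketch of the general technique, not the OSCAR algorithm itself, and all profiles and dimensions are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic GC/MS-like data: D (scans x m/z) = C (elution profiles) @ S.T (spectra)
t = np.linspace(0.0, 10.0, 80)
C_true = np.column_stack([np.exp(-((t - 4.0) ** 2)),    # component 1
                          np.exp(-((t - 5.0) ** 2))])   # heavily overlapping component 2
S_true = rng.random((30, 2))                            # two synthetic "mass spectra"
D = C_true @ S_true.T

# Alternating least squares with a non-negativity constraint:
# fix S and solve for C, fix C and solve for S, clipping negatives each step.
S = rng.random((30, 2))
for _ in range(200):
    C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
    S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)

rel_err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
# the two-component bilinear model reconstructs D closely even though the
# elution profiles overlap far more than visually separable peaks would
```

The key point mirrored from the abstract is that a bilinear model can pull apart co-eluting components that appear as a single chromatographic peak; the real OSCAR algorithm adds further stepwise constraints beyond simple non-negativity.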
Abstract:
Natural products constitute an important source of new drugs. The bioavailability of a drug depends on its absorption, distribution, metabolism and elimination. To achieve good bioavailability, the drug must be soluble in water, stable in the gastrointestinal tract and palatable. Binding proteins may improve the solubility of drug compounds, mask unwanted properties such as bad taste, bitterness or toxicity, and transport or protect these compounds during processing and storage. The focus of this thesis was to study the interactions of bovine and reindeer β-lactoglobulin (βLG), including ligand binding and the effects of pH and temperature, with compounds such as retinoids and phenolic compounds as well as with compounds from plant extracts, and to investigate the transport properties of the βLG-ligand complex. To examine the binding interactions of different ligands with βLG, new methods were developed. The fluorescence binding method for the evaluation of ligand binding to βLG was miniaturized from a quartz cell to a 96-well plate. A method of ultrafiltration sampling combined with high-performance liquid chromatography was developed to assess the binding of compounds from extracts. The interactions of phenolic compounds or retinoids with βLG were investigated using the 96-well plate method. The majority of the flavones, flavonols, flavanones and isoflavones, and all of the retinoids included, were shown to bind to bovine and reindeer βLG. Phenolic compounds, contrary to retinol, were not released at acidic pH. These results suggest that βLG may have more binding sites, probably also on the surface of βLG. Extracts from Camellia sinensis (L.) O. Kunze (black tea), Urtica dioica L. (nettle) and Piper nigrum (black pepper) were used to evaluate whether βLG could bind compounds from plant extracts. Piperine from P. nigrum was found to bind tightly, and rutin from U. dioica weakly, to βLG. No components from C. sinensis bound to βLG in our experiment.
The uptake and membrane permeation of bovine and reindeer βLG, free and bound with retinol, palmitic acid and cholesterol, were investigated using Caco-2 cell monolayers. Both bovine and reindeer βLG were able to cross the Caco-2 cell membrane. Free and βLG-bound retinol and palmitic acid were transported equally, whereas cholesterol could not cross the Caco-2 cell monolayer free or bound to βLG. Our results showed that βLG can bind different natural product compounds, but cannot enhance transport of retinol, palmitic acid or cholesterol through Caco-2 cells. Despite this, βLG, as a water-soluble binding protein, may improve the solubility of natural compounds, possibly protecting them from early degradation and transporting some of them through the stomach. Furthermore, it may decrease their bad or bitter taste during oral administration of drugs or in food preparations. βLG can also enhance or decrease the health benefits of herbal teas and food preparations by binding compounds from extracts.
Abstract:
Miniaturized analytical devices, such as the heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits such as faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used with low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work HN chips were developed further and new applications were introduced. First, a new method for the thermal and fluidic characterization of HN microchips was developed and used to study the chips. The thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil – an extremely complex sample – by microchip atmospheric pressure photoionization (APPI) high resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability for high-temperature operation, microchip APPI provided efficient vaporization of the nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was presented. Ionization was achieved by applying only a high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to couple gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization.
Microchip APPI allows efficient ionization of both polar and nonpolar compounds, whereas with the most popular technique, electrospray ionization (ESI), only polar and ionic molecules are ionized efficiently. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC–HN microchip was developed and presented. A single microdevice contained the structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI. Such ionization of both nonpolar and polar analytes is not possible with previously presented chips for LC–MS, since they rely on ESI. The preliminary quantitative performance of the new chip was evaluated, and the chip was also demonstrated with optical detection. A new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into an MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.
Abstract:
The number of drug substances in formulation development in the pharmaceutical industry is increasing. Some of these are amorphous drugs with glass transition temperatures below ambient temperature, and thus they are usually difficult to formulate and handle. One reason for this is the reduced viscosity, related to the stickiness of the drug, which makes them complicated to handle in unit operations. Thus, the aim of this thesis was to develop a new processing method for a sticky amorphous model material. Furthermore, the model materials were characterised before and after formulation, using several characterisation methods, to understand more precisely the prerequisites for the physical stability of the amorphous state against crystallisation. The model materials used were monoclinic paracetamol and citric acid anhydrate. Amorphous materials were prepared by melt quenching or by ethanol evaporation. The melt blends were found to have slightly higher viscosity than the ethanol-evaporated materials. However, melt-produced materials crystallised more easily upon consecutive shearing than ethanol-evaporated materials. The only material that did not crystallise during shearing was a 50/50 (w/w, %) blend, regardless of the preparation method, and it was physically stable for at least two years in dry conditions. Shearing at varying temperatures was established as a method to measure the physical stability of amorphous materials under processing and storage conditions. The actual physical stability of the blends was better than that of the pure amorphous materials at ambient temperature. Molecular mobility was not related to the physical stability of the amorphous blends, observed as crystallisation. The molecular mobility of the 50/50 blend derived from spectral linewidth as a function of temperature using solid-state NMR correlated better with the molecular mobility derived from a rheometer than that derived from differential scanning calorimetry data.
Based on the results obtained, the effects of molecular interactions, thermodynamic driving force and miscibility of the blends are discussed as the key factors in stabilising the blends. Stickiness was found to be affected by glass transition and viscosity. Ultrasound extrusion and cutting were successfully tested as means to increase the processability of the sticky material. Furthermore, it was found to be possible to process the physically stable 50/50 blend in a supercooled liquid state instead of a glassy state. The method was not found to accelerate crystallisation. This may open up new possibilities to process amorphous materials that are otherwise impossible to manufacture into solid dosage forms.
Abstract:
In this dissertation, I present an overall methodological framework for studying linguistic alternations, focusing specifically on lexical variation in denoting a single meaning, that is, synonymy. As the practical example, I employ the synonymous set of the four most common Finnish verbs denoting THINK, namely ajatella, miettiä, pohtia and harkita ‘think, reflect, ponder, consider’. As a continuation of previous work, I describe in considerable detail the extension of statistical methods from dichotomous linguistic settings (e.g., Gries 2003; Bresnan et al. 2007) to polytomous ones, that is, settings concerning more than two possible alternative outcomes. The applied statistical methods are arranged in a succession of stages of increasing complexity, proceeding from univariate via bivariate to multivariate techniques. As the central multivariate method, I argue for the use of polytomous logistic regression and demonstrate its practical application to the studied phenomenon, thus extending the work of Bresnan et al. (2007), who applied simple (binary) logistic regression to a dichotomous structural alternation in English. The results of the various statistical analyses confirm that a wide range of contextual features across different categories are indeed associated with the use and selection of the selected THINK lexemes; however, a substantial part of these features are not exemplified in current Finnish lexicographical descriptions. The multivariate analysis indicates that the semantic classifications of syntactic argument types are on average the most distinctive feature category, followed by overall semantic characterizations of the verb chains, and then syntactic argument types alone, with morphological features pertaining to the verb chain and extra-linguistic features relegated to the last position.
In terms of the overall performance of the multivariate analysis and modeling, the prediction accuracy seems to reach a ceiling at a recall rate of roughly two-thirds of the sentences in the research corpus. The analysis of these results suggests a limit to what can be explained and determined within the immediate sentential context using the conventional descriptive and analytical apparatus based on currently available linguistic theories and models. The results also support Bresnan’s (2007) and others’ (e.g., Bod et al. 2003) probabilistic view of the relationship between linguistic usage and the underlying linguistic system, in which only a minority of linguistic choices are categorical, given the known context – represented as a feature cluster – that can be analytically grasped and identified. Instead, most contexts exhibit degrees of variation as to their outcomes, resulting in proportionate choices over longer stretches of usage in texts or speech.
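As a sketch of the central multivariate method, polytomous (multinomial) logistic regression over more than two outcomes can be implemented as softmax regression. The three "verb" classes and two context features below are invented stand-ins for the actual linguistic data, not material from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 3 possible outcomes (cf. a set of near-synonymous verbs)
# described by 2 context features; 50 observations per outcome.
n_per, n_class = 50, 3
y = np.repeat(np.arange(n_class), n_per)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = centers[y] + rng.normal(scale=0.5, size=(n_per * n_class, 2))
X = np.hstack([np.ones((len(X), 1)), X])   # intercept column

# Fit by batch gradient descent on the multinomial log-likelihood.
W = np.zeros((n_class, X.shape[1]))        # one weight row per outcome
Y = np.eye(n_class)[y]                     # one-hot coded outcomes
for _ in range(500):
    logits = X @ W.T
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
    W -= 0.5 * (P - Y).T @ X / len(X)      # gradient step

pred = (X @ W.T).argmax(axis=1)
accuracy = (pred == y).mean()
# each row of P is a full probability distribution over the outcomes,
# matching the probabilistic, non-categorical view of choice above
```

Unlike binary logistic regression, the model assigns every context a probability for each of the alternative outcomes, which is what makes "proportionate choices" over a corpus directly expressible.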
Abstract:
The emperor of our fatherland: the changing national identity of the elite and the construction of the Finnish fatherland at the beginning of the autonomy. This study addresses the question of the changing national identity of the elite at the beginning of the autonomy (1808–1814) in Finland. Russia had conquered Finland from Sweden, but Finland was not incorporated into the Russian Empire. Instead, it was governed as a separately administered area, and Finland retained its own laws (the laws of the realm of Sweden). The inclusion in the Russian Empire compelled the elite of Finland to reconsider their national identity; they had to determine whether they remained Swedes or became Finns or Russians. The elite chose to become Finns, which may seem obvious from today's perspective, but it cannot be taken for granted that the Swedish-speaking and noble elite would convert their local Finnish identity into a new national identity. The basis of this study is constructivist in the sense that identity is not seen as stable and constant. The theoretical background lies in Stuart Hall's writings on national identity, which offer good practical methods for studying national identity. According to Hall, identity is based mainly on "difference", difference from "others". In practice this means examining how the elite began to define themselves in contrast to Swedes and Russians. The Finnish national identity was constructed in contrast to the Swedes for political reasons. In order to avoid Russian suspicions, the Finns had to diverge from Sweden. Sweden had also gone through a coup d'état, which was disliked by the elite of Finland. However, the attitudes of the elite towards Sweden remained somewhat ambiguous. Even if it was politically the wisest course to draw away from Sweden, emotionally it was difficult. Russia, on the other hand, had for centuries been the archenemy of the Finns as well as of the Swedes. The fear of the Russians was mainly imaginary.
Russians were seen as cruel barbarians who hated and resented Finns. The Finnish national identity was constructed above all in contrast to the Russians, for the difference from Russia was seen as a precondition for the existence of Finland. At the same time, the new position of Finland also required an approach towards Russia, which was very pragmatic in nature. The elite contrived to rid itself of its prejudice against Russians on an intellectual level, but not on an emotional level. At the beginning of the autonomy the primary loyalty of the elite was directed at the Finnish fatherland and its inhabitants. This was a radical ideological change, because traditionally the loyalty of the elite had focused on the monarch and the monarch's realm. However, the role of Alexander I was crucial. According to the elite, the emperor had granted them a new fatherland. The former native country (Finland) was now seen as the fatherland instead of Sweden. The loyalty of the elite to the emperor grew out of reciprocal gratitude; Alexander I had treated their native country so mercifully. The elite felt a strong personal responsibility for Finland's existence. The elite believed that the future of Finland rested on their shoulders. Alexander I had given them a fatherland, but it was in the hands of the elite to construct the Finnish state and national spirit. The study of the Finnish national identity also shows that the national identity was constructed by emphasizing Finns' civic rights. The civic rights were an essential part of the construction of the Finnish national identity, for the difference between Finns and Russians was based on the Finns' own laws and privileges, which the emperor of Russia had guaranteed.
Abstract:
Reciprocal development of the object and subject of learning: the renewal of the learning practices of front-line communities in a telecommunications company as part of the techno-economic paradigm change. Current changes in production have been seen as an indication of a shift from the techno-economic paradigm of the mass-production era to a new paradigm of the information and communication technological era. The rise of knowledge management in the late 1990s can be seen as one aspect of this paradigm shift, as knowledge creation and customer responsiveness were recognized as the prime factors in business competition. However, paradoxical conceptions concerning learning and agency have been presented in the discussion of knowledge management. One prevalent notion in the literature is that learning is based on individuals’ voluntary actions, and this has now become incompatible with the growing interest in knowledge-management systems. Furthermore, the commonly held view of learning as a general process that is independent of the object of learning contradicts the observation that the current need for new knowledge and new competences is caused by ongoing techno-economic changes. Even though the current view acknowledges that individuals and communities have key roles in knowledge creation, this conception defies the idea of individuals’ and communities’ agency in developing the practices through which they learn. This research therefore presents a new theoretical interpretation of learning and agency based on Cultural-Historical Activity Theory. This approach overcomes the paradoxes in knowledge-management theory and offers means for understanding and analyzing changes in the ways of learning within work communities. This research is also an evaluation of the Competence-Laboratory method, which was developed as part of the study as a special application of Developmental Work Research methodology.
The research data comprise the videotaped competence-laboratory processes of four front-line work communities in a telecommunications company. The findings reported in the five articles included in this thesis are based on analyses of these data. The new theoretical interpretation offered here is based on the assessment that the findings reported in the articles represent one of the front lines of the ongoing historical transformation of work-related learning, since the research site represents one of the key industries of the new “knowledge society”. The research can be characterized as the elaboration of a hypothesis concerning the development of work-related learning. According to the new theoretical interpretation, the object of activity is also the object of distributed learning in work communities. The historical socialization of production has increased the number of actors involved in an activity, which has also increased the number of mutual interdependencies as well as the need for communication. Learning practices and organizational systems of learning are historically developed forms of distributed learning mediated by specific forms of division of labor, specific tools, and specific rules. However, the learning practices of the mass-production era are becoming increasingly inadequate under the conditions of the new economy. This was manifested in the front-line work communities at the research site as an aggravating contradiction between the new objects of learning and the prevailing learning practices. The constituent element of this new theoretical interpretation is the idea of a work community’s learning as part of its collaborative mastery of the developing business activity. The development of the business activity is at the same time a practical and an epistemic object for the community.
This kind of changing object cannot be mastered by using learning practices designed for the stable conditions of mass production, because learning has to change along with the changes in the business. According to the model introduced in this thesis, the transformation of learning proceeds through specific stages: predefined learning tasks are first transformed into learning through re-conceptualizing the object of the activity and of the joint learning, and then, as the new object becomes stabilized, into the creation of new kinds of learning practices to master the re-defined object of the activity. This transformation of the form of learning is realized through a stepwise expansion of the work community’s agency. To summarize, the conceptual model developed in this study sets the tool-mediated co-development of the subject and the object of learning as the theoretical starting point for developing new, second-generation knowledge-management methods. Key words: knowledge management, learning practice, organizational system of learning, agency
Abstract:
Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology of Seinäjoki Central Hospital and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up, and additionally a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among the relapsing-remitting MS patients in the present study was 42%. The impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among the MS patients at a group level. However, the practice effects in cognitive tests were less pronounced among MS patients than among healthy controls.
At an individual level the spectrum of the MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment, and 69% of the patients were correctly classified as cognitively impaired or unimpaired when a comprehensive neuropsychological assessment was used as the "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain the misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
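The classification figures reported above follow from standard diagnostic-accuracy formulas. In the sketch below the confusion-matrix cell counts are hypothetical values, chosen only so that the totals are consistent with the reported 45 patients, ~42% impairment rate and ~69% overall agreement; they are not data from the study:

```python
def diagnostic_stats(tp, fn, fp, tn):
    """Standard diagnostic accuracy measures from a 2 x 2 confusion matrix.

    tp/fn: impaired patients correctly / incorrectly classified by the test
    fp/tn: unimpaired patients incorrectly / correctly classified
    """
    sensitivity = tp / (tp + fn)              # share of impaired cases detected
    specificity = tn / (tn + fp)              # share of unimpaired cases cleared
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# Hypothetical 2 x 2 table for 45 patients, 19 of them impaired (~42%)
sens, spec, acc = diagnostic_stats(tp=14, fn=5, fp=9, tn=17)
# acc = 31/45, i.e. roughly the 69% correct classification reported above;
# with these counts specificity trails sensitivity, illustrating how a test
# can be "fairly sensitive, but less specific"
```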