12 results for Nonrandom two-liquid model

in Helda - Digital Repository of University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

The educational reform launched in Finland in 2008 concerns the implementation of the Special Education Strategy (Opetusministeriö 2007) under an improvement initiative called Kelpo. One of the main alterations proposed by the Strategy relates to the support system for comprehensive school pupils: the existing two-level model (general and special support) is to be replaced by a new three-level model (general, intensified and special support). There are 233 municipalities involved nationwide in the Kelpo initiative, each of which has a municipal coordinator as a national delegate. The Centre for Educational Assessment [the Centre] at the University of Helsinki, led by Professor Jarkko Hautamäki, carries out the developmental assessment of the initiative's development process. As part of that assessment the Centre interviewed 151 municipal coordinators in November 2008. This thesis considers the Kelpo initiative from the perspective of Michael Fullan's change theory. The aim is to identify change-theoretical factors in the speech of the municipal coordinators interviewed by the Centre, and to form a view of the crucial factors in the reform implementation process. The appearance of these change-theoretical factors in the coordinators' speech, and the meaning of these appearances, are considered from the change-process point of view. The Centre collected the data by interviewing the municipal coordinators (n=151) in small groups of 4-11 people. The interview method was based on Vesala and Rantanen's (2007) qualitative attitude survey method, which was adapted and further developed for the Centre's developmental assessment by Hilasvuori. The method of analysis was qualitative theory-based content analysis, carried out using the Atlas.ti software. The theoretical frame of reference was grounded in Fullan's change theory, and the analysis was based on three change-theoretical categories: implementation, cooperation and perspectives on the change process. The analysis of the interview data revealed spoken expressions in the coordinators' speech that were either positively or negatively related to the theoretical categories. On the basis of these change-theoretical relations, the presence of a change process was observed. The crucial factors of reform implementation were identified, and the conclusion is that the encounter between the new reform-based strategies and the strategies already existing in schools produces interface challenges. These challenges are confronted particularly in the context of the implementation of the new three-level support model. The interface challenges are classified as conceptual, method-based, action-based and belief-based challenges.
Keywords: reform, implementation, change process, Michael Fullan, Kelpo, intensified support, special support

Relevance:

100.00%

Publisher:

Abstract:

Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition to CRC is probably due to undiscovered low-penetrance variants. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered tumor suppressor genes: the first hit is inherited in the germline, and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele in HNPCC tumors was detected, and a new model, arguing against the two-hit hypothesis, was proposed for somatic HNPCC tumorigenesis. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model of HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn's disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To extend our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in a case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with an increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that copy number increase affected 22% of the tumors and, interestingly, favored the G allele. By utilizing a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affects the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect on CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
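The association analysis summarised above boils down to comparing risk-allele frequencies between cases and controls. As a rough illustration only, here is a minimal Python sketch of an allelic odds-ratio calculation with a Wald confidence interval; the function name and all allele counts are hypothetical and are not the thesis data.

import math

def allelic_odds_ratio(case_risk, case_other, control_risk, control_other):
    # Odds ratio for the risk allele, with a 95% Wald confidence interval.
    or_ = (case_risk * control_other) / (case_other * control_risk)
    se = math.sqrt(1 / case_risk + 1 / case_other + 1 / control_risk + 1 / control_other)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Made-up counts of the risk (G) and non-risk alleles in cases and controls:
print(allelic_odds_ratio(case_risk=1150, case_other=934, control_risk=1020, control_other=1004))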

Relevance:

100.00%

Publisher:

Abstract:

The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even though the forest characteristics remain the same. This variation has so far been diminished by different radiometric corrections. In this study the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs was examined, the clearest differences were found between the front-lighted and back-lighted image halves. Within an image half, the differences between blocks were clearly larger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands. The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in the regression models. Five different trend surfaces based on the converted image co-ordinates were used in the models in order to diminish the effect of bidirectional reflectance. The reference model of total volume, in which the location of the image point was ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation: the best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation, with a focus on improving regression estimation and feature selection or on examining the stratification used in two-phase sampling inventory techniques.
Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image co-ordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
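To make the trend-surface idea concrete: the correction simply adds polynomial terms of the converted image co-ordinates to the regression model, so that the bidirectional-reflectance trend is absorbed by the surface rather than by the spectral features. The sketch below fits a complete third-order surface with ordinary least squares; the feature arrays, co-ordinates and response are random placeholders, not the Leivonmäki data.

import numpy as np

def third_order_surface(x, y):
    # Complete third-order polynomial terms of the converted image co-ordinates.
    return np.column_stack([x, y, x * y, x**2, y**2, x**2 * y, x * y**2, x**3, y**3])

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 5))                   # stand-in for pixel values and texture features
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)  # sun-azimuth-converted image co-ordinates
log_volume = rng.normal(size=n)                      # placeholder response (log of volume per hectare)

X = np.column_stack([np.ones(n), features, third_order_surface(x, y)])
coef, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
rmse = float(np.sqrt(np.mean((X @ coef - log_volume) ** 2)))
print(rmse)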

Relevance:

100.00%

Publisher:

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so that we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être of the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
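The first modelling step described above, learning linear features from image patches, can be sketched with off-the-shelf tools. In the toy example below, scikit-learn's PCA and FastICA stand in for the estimation methods of the thesis, and the "patches" are random placeholders rather than samples from natural images; with real photographic patches the learned filters would resemble localised, oriented, simple-cell-like receptive fields.

import numpy as np
from sklearn.decomposition import PCA, FastICA

patches = np.random.randn(10000, 16 * 16)          # stand-in for 16x16 image patches
patches -= patches.mean(axis=1, keepdims=True)     # remove the mean (DC component) of each patch

pca = PCA(n_components=64, whiten=True)            # dimensionality reduction and whitening
z = pca.fit_transform(patches)

ica = FastICA(n_components=64, max_iter=500)
sources = ica.fit_transform(z)                     # sparse, independent component responses
filters = pca.inverse_transform(ica.mixing_.T)     # back-project the learned basis to pixel space
print(filters.shape)                               # (64, 256): one learned filter per row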

Relevance:

100.00%

Publisher:

Abstract:

This licentiate's thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy, making use of a model that exploits the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output responses to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output: undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
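For readers unfamiliar with this modelling device, the substitutability assumption of the first model is commonly formalised by letting a composite of private and government consumption enter period utility; the CES form below is a sketch of the general idea, not necessarily the exact specification used in the thesis.

\[
\tilde{C}_t \;=\; \Bigl[(1-\gamma)\,C_t^{\frac{\rho-1}{\rho}} \;+\; \gamma\,G_t^{\frac{\rho-1}{\rho}}\Bigr]^{\frac{\rho}{\rho-1}},
\qquad 0<\gamma<1,\;\; \rho>0,
\]

where C_t is private consumption, G_t is government consumption, \gamma is the weight on government consumption and \rho governs the substitutability between the two. The larger \rho is, the more an increase in G_t substitutes for C_t in utility, which is the channel behind the crowding-out, output and welfare results summarised above.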

Relevance:

100.00%

Publisher:

Abstract:

To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profile during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. As the water cools, overturn occurs and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as a basis for observation systems and even as climate indicators. Numerical models can be used to calculate the temperature in a lake on the basis of the meteorological input at the surface. The simplest form is to solve only the surface temperature. The depth of the lake affects heat transfer, together with other morphological features such as the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores has an effect both over the lake and inside the water body, bringing limitations to the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake, while a turbulence model can simulate the vertical temperature profile in a more detailed way. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, which has many important consequences for lake biology. The one-dimensional modelling approach was successfully studied by comparing a one-layer model, a two-layer model and a turbulence model; the turbulence model was applied to lakes with different sizes, shapes and locations. Lake models need data from the lakes for model adjustment. The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to synoptic data at three-hour intervals. Data covering roughly the past 100 years were used on the mesoscale, at a range of about 100 km, together with climate change scenarios for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases the ice cover. Lake ice data were used for modelling different kinds of lakes and were also analysed statistically in a global context. The results were further compared with the results of a hydrological watershed model and with data on seasonal development from very small lakes.
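A two-layer formulation of the kind compared here can be written as a pair of heat-balance equations for the epilimnion and the hypolimnion. The sketch below uses a bulk surface heat flux, fixed layer depths and a constant exchange coefficient; all parameter values and the crude diurnal forcing are illustrative and do not come from the thesis.

import numpy as np

RHO_W, CP = 1000.0, 4186.0        # density (kg m-3) and specific heat (J kg-1 K-1) of water

def step(t_epi, t_hypo, q_surf, h_epi=5.0, h_hypo=15.0, k_exch=1e-6, dt=3600.0):
    # Advance the layer temperatures by one time step.
    # q_surf: net surface heat flux (W m-2); k_exch: exchange velocity between layers (m s-1).
    exch = k_exch * (t_epi - t_hypo)                       # heat exchange term (K m s-1)
    dt_epi = (q_surf / (RHO_W * CP) - exch) / h_epi * dt
    dt_hypo = exch / h_hypo * dt
    return t_epi + dt_epi, t_hypo + dt_hypo

t_e, t_h = 15.0, 8.0
for hour in range(24 * 30):                                # one month of hourly steps
    q = 200.0 * max(np.sin(2 * np.pi * hour / 24.0), 0.0) - 50.0   # crude day/night forcing
    t_e, t_h = step(t_e, t_h, q)
print(round(t_e, 2), round(t_h, 2))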

Relevance:

100.00%

Publisher:

Abstract:

This research has been prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuels in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign for soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen has also been measured by a continuous method in Helsinki in 2007-2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s-1. The ranges of the results with the radon tracer method and the two-dimensional model were 0.13-0.93 mm s-1 and 0.12-0.61 mm s-1, respectively, in Helsinki. The vd values in the three-day campaign at Pallas were 0.06-0.52 mm s-1 (chamber) and 0.18-0.52 mm s-1 (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, while the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values. The H2 vd values were compared with the snow depth, and a decaying exponential fit was obtained as a result. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008; such conditions prevailed when high chamber vd values were measured. The mixing ratio of molecular hydrogen shows seasonal variation. The lowest atmospheric mixing ratios were found in late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, due to emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a-1. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods, so that natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
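In a static closed chamber, the deposition velocity follows from the decay of the mixing ratio after chamber closure: fitting a first-order decay constant k to the readings gives vd = (V/A)·k, where V is the chamber volume and A the enclosed soil area. The sketch below illustrates this; the chamber dimensions and mixing-ratio readings are hypothetical examples, not the Helsinki, Loppi or Pallas data, and the ambient background is neglected for simplicity.

import numpy as np

t = np.array([0.0, 600.0, 1200.0, 1800.0, 2400.0])   # seconds since chamber closure
c = np.array([530.0, 495.0, 463.0, 434.0, 406.0])    # H2 mixing ratio inside the chamber (ppb)

# Fit ln(c) = ln(c0) - k*t, i.e. first-order decay towards zero.
slope, intercept = np.polyfit(t, np.log(c), 1)
k = -slope                                           # decay constant (s-1)

V, A = 0.03, 0.12                                    # chamber volume (m3) and footprint area (m2)
vd = (V / A) * k                                     # deposition velocity (m s-1)
print(f"vd = {vd * 1000:.3f} mm s-1")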

Relevance:

100.00%

Publisher:

Abstract:

The blood-brain barrier protects the brain from foreign substances in the bloodstream. In vivo and in vitro methods for studying the blood-brain barrier have been widely reported in the literature, but only a few computational models describing the pharmacokinetics of compounds in the brain have been presented. In this study, blood-brain barrier permeability coefficients determined with different in vitro and in vivo methods were compiled from the literature. In addition, two pharmacokinetic computer models of the blood-brain barrier were built: a microdialysis model and an efflux model. The microdialysis model is a simple pharmacokinetic model consisting of two compartments (blood circulation and brain). It was used to simulate the concentrations of five compounds in rat brain and blood circulation on the basis of parameters determined in vivo. The model did not reproduce concentration-time profiles that exactly matched the in vivo situation, owing to simplifications in its structure, such as the lack of a brain tissue compartment and of transporter-protein kinetics. The efflux model has three compartments: the blood circulation, the endothelial cell compartment of the blood-brain barrier and the brain. It was used in theoretical simulations to study the significance of an active efflux protein located on the luminal membrane of the blood-brain barrier, and of passive permeation, for compound concentrations in the brain extracellular fluid. The parameter examined was the steady-state ratio of unbound compound concentrations between brain and blood (Kp,uu). The results showed the effect of the efflux protein on concentrations in accordance with Michaelis-Menten kinetics. The efflux model is well suited for theoretical simulations, and active transporters can be added to it. Theoretical simulations make it possible to combine results from in vitro and in vivo studies, and the individual factors can be examined within a single simulation.
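The steady-state quantity Kp,uu examined with the efflux model can be illustrated with a much-reduced sketch in which passive permeation into the brain competes with Michaelis-Menten efflux at the barrier. All parameter values below are illustrative, and the brain is treated as a single well-mixed compartment, so this is not the three-compartment model of the thesis.

def kpuu_steady_state(c_plasma, ps=10.0, vmax=50.0, km=1.0, v_brain=0.2, dt=0.001, t_end=50.0):
    # Integrate dC_brain/dt = (PS*(C_plasma - C_brain) - Vmax*C_brain/(Km + C_brain)) / V_brain
    # to an approximate steady state and return the unbound brain-to-plasma ratio Kp,uu.
    c_brain = 0.0
    for _ in range(int(t_end / dt)):
        efflux = vmax * c_brain / (km + c_brain)     # Michaelis-Menten efflux at the barrier
        c_brain += (ps * (c_plasma - c_brain) - efflux) / v_brain * dt
    return c_brain / c_plasma

for cp in (0.1, 1.0, 10.0, 100.0):                   # efflux saturates as plasma exposure rises
    print(cp, round(kpuu_steady_state(cp), 3))

At low plasma concentrations the efflux protein keeps Kp,uu well below one, while at high concentrations the transporter saturates and Kp,uu approaches the passive-permeation limit, which is the Michaelis-Menten behaviour noted above.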

Relevance:

40.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC as long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and quantitativeness suffers only slightly in the presence of matrix and when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
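Of the basic operations mentioned above, determining a peak volume essentially means folding the raw detector trace into a two-dimensional matrix by the modulation period and summing the baseline-corrected intensities inside the peak region. The sketch below shows the idea; the acquisition rate, modulation period, signal and peak boundaries are all hypothetical placeholders rather than values from the instrumentation developed in this work.

import numpy as np

acq_rate = 100.0                                     # detector data points per second
mod_period = 4.0                                     # modulation period (s), i.e. second-dimension cycle
signal = np.random.rand(int(600 * acq_rate))         # placeholder raw trace for a 10-minute run

pts_per_mod = int(mod_period * acq_rate)
n_mod = len(signal) // pts_per_mod
plane = signal[: n_mod * pts_per_mod].reshape(n_mod, pts_per_mod)   # rows: 1st dimension, cols: 2nd

def peak_volume(plane, rows, cols, baseline=0.0):
    # Sum of baseline-corrected intensities inside a rectangular 2D region.
    region = plane[rows[0]:rows[1], cols[0]:cols[1]] - baseline
    return float(np.clip(region, 0.0, None).sum())

print(peak_volume(plane, rows=(40, 45), cols=(120, 180)))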

Relevance:

40.00%

Publisher:

Abstract:

The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.

Relevance:

40.00%

Publisher:

Abstract:

Foreign compounds, such as drugs, are metabolised in the body in numerous reactions. Metabolic reactions are divided into phase I (functionalisation) and phase II (conjugation) reactions. Uridine diphosphoglucuronosyltransferase enzymes (UGTs) are important catalysts of the phase II metabolic system. They catalyse the transfer of glucuronic acid to small lipophilic molecules and convert them into hydrophilic and polar glucuronides that are readily excreted from the body. The liver is the main site of drug metabolism. Many drugs are racemic mixtures of two enantiomers, and glucuronidation of a racemic compound yields a pair of diastereomeric glucuronides. Stereoisomers are interesting substrates in glucuronidation studies since some UGTs display stereoselectivity. Diastereomeric glucuronides of O-desmethyltramadol (M1) and entacapone were selected as model compounds in this work. The investigations of the thesis deal with enzymatic glucuronidation and the development of analytical methods for drug metabolites, particularly diastereomeric glucuronides. The glucuronides were analysed from complex biological matrices, such as urine, or from in vitro incubation matrices. Various pretreatment techniques were needed to purify, concentrate and isolate the analytes of interest. Analyses were carried out by liquid chromatography (LC) with ultraviolet (UV) or mass spectrometric (MS) detection, or with capillary electromigration techniques. Commercial glucuronide standards were not available for the studies. Enzyme-assisted synthesis with rat liver microsomes was therefore used to produce M1 glucuronides as reference compounds. The glucuronides were isolated by LC/UV and ultra-performance liquid chromatography (UPLC)/MS, while tandem mass spectrometry (MS/MS) and nuclear magnetic resonance (NMR) spectroscopy were employed in structural characterisation. The glucuronides were identified as phenolic O-glucuronides of M1. To identify the UGT enzymes active in (±)-M1 glucuronidation, recombinant human UGTs and human tissue microsomes were incubated with (±)-M1. The study revealed that several UGTs can catalyse (±)-M1 glucuronidation. Glucuronidation in human liver microsomes, like that in rat liver microsomes, is stereoselective. The results of the studies showed that UGT2B7 is most probably the main UGT responsible for (±)-M1 glucuronidation in human liver. Large variation in the stereoselectivity of UGTs toward the (±)-M1 enantiomers was observed. Formation of the M1 glucuronides was monitored with a fast and selective UPLC/MS method. Capillary electromigration techniques are known for their high resolution power. A method relying on capillary electrophoresis (CE) with UV detection was developed for the separation of tramadol and its free and glucuronidated metabolites, and its suitability for identifying tramadol metabolites in authentic urine samples was tested. Unaltered tramadol and four of its main metabolites were detected in the electropherogram. A micellar electrokinetic chromatography (MEKC)/UV method was developed for the separation of the glucuronides of entacapone in human urine. The validated method was tested in the analysis of urine samples from patients, and the glucuronides of entacapone could be quantified after oral entacapone dosing.