20 results for Intraday volatility
at Université de Lausanne, Switzerland
Abstract:
Preface My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach in representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. Given the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by positing that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and much of its potential remains unexplored. The three essays composing my thesis use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function is proposed that reconciles expected utility maximization with tail-related performance measures. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery pursuit.
Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model that takes investors' crash aversion into account. Using the skewed t distribution, optimal allocations are characterized as a tradeoff between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation a systematic over-performance is obtained compared to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. Dynamic strategies therefore reduce to the portfolio replicating these payoffs using exchange-traded, well-selected options and the risky stock.
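The four-moment tradeoff described above can be illustrated with a short sketch. This is not the thesis's actual model: the skewed t calibration is replaced by simulated returns and an assumed CRRA-style fourth-order Taylor criterion, so all numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated risky-asset excess returns with fat tails and negative skew
# (an illustrative stand-in for the skewed t distribution of the thesis)
r = 0.06 + 0.20 * rng.standard_t(df=5, size=100_000) \
    - 0.05 * rng.exponential(size=100_000)

def taylor_eu(w, r, gamma=4.0):
    """Fourth-order Taylor approximation of CRRA expected utility of 1 + w*r.

    The signs make the tradeoff explicit: odd moments (mean, skewness)
    enter with a positive sign, even moments (variance, kurtosis) with a
    negative sign."""
    x = w * r
    return (np.mean(x)
            - gamma / 2 * np.mean(x ** 2)
            + gamma * (gamma + 1) / 6 * np.mean(x ** 3)
            - gamma * (gamma + 1) * (gamma + 2) / 24 * np.mean(x ** 4))

# Grid search over the risky-asset weight
ws = np.linspace(0.0, 2.0, 201)
w_star = ws[np.argmax([taylor_eu(w, r) for w in ws])]
print(f"optimal risky-asset weight: {w_star:.2f}")
```

Because the simulated returns carry a small positive mean but heavy tails and negative skew, the optimizer settles on a modest positive weight rather than the larger one a mean-variance criterion alone would pick.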
Abstract:
Abstract Market prices of corporate bond spreads and of credit default swap (CDS) rates do not match each other. In this paper, we argue that the liquidity premium, the cheapest-to-deliver (CTD) option and actual market segmentation explain the pricing differences. Using European transaction data from Reuters and Bloomberg, we estimate a liquidity premium that is time-varying and firm-specific. We show that when time-dependent liquidity premiums are taken into account, corporate bond spreads and CDS rates track each other much more closely than previous studies have shown. We find that high equity volatility drives pricing differences that can be explained by the CTD option.
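The pricing mismatch the paper studies is usually summarized by the CDS-bond basis. A minimal numerical sketch (all figures in basis points and purely hypothetical; the paper estimates the liquidity premium from transaction data rather than assuming it):

```python
# Hypothetical quantities in basis points.
cds_rate = 120.0           # CDS premium for the reference entity
bond_spread = 150.0        # same-maturity corporate bond yield spread
liquidity_premium = 35.0   # assumed firm-specific, time-varying estimate

# Raw basis: negative values are the typical mismatch earlier studies report.
raw_basis = cds_rate - bond_spread

# Removing the liquidity component from the bond spread narrows the gap.
adjusted_basis = cds_rate - (bond_spread - liquidity_premium)

print(raw_basis, adjusted_basis)  # -30.0 5.0
```

The residual basis left after the liquidity adjustment is what the paper attributes to the CTD option and market segmentation.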
Abstract:
This article analyses the stability and volatility of party preferences using data from the Swiss Household Panel (SHP), which, for the first time, allows studying transitions and stability of voters over several years in Switzerland. The analyses cover the years 1999-2007 and systematically distinguish changes between party blocks from changes within party blocks. The first part looks at different patterns of change, which show relatively high volatility. The second part tests several theories on the causes of such changes by applying a multinomial random-effects model. Results show that party preferences stabilise with their duration and with age, and that the electoral cycle, political sophistication, socio-structural predispositions, the household context, as well as party size and the number of parties, each explain part of electoral volatility. The different results for within- and between-party-block changes underline the importance of that differentiation.
Abstract:
Syrian dry areas have for several millennia been a place of interaction between human populations and the environment. While environmental constraints and heterogeneity condition human occupation and the exploitation of resources, socio-political, economic and historical elements play a fundamental role. Since the late 1980s, Syrian dry areas have been viewed as suffering a serious water crisis due to groundwater overdraft. The Syrian administration and international development agencies believe that groundwater overexploitation is also leading to a decline of agricultural activities and to increasing poverty, and that action is thus required to address these problems. However, the overexploitation diagnosis needs to be reviewed. The overexploitation discourse appeared in the context of Syria's opening to international organizations and to the market economy, and it echoes the international discourse of a "global water crisis". The diagnosis is based on national indicators recycling old Soviet data that have not been updated. In the post-Soviet era, the Syrian national water policy seems to have abandoned large surface-water irrigation projects in favor of a strategy of water use rationalization and groundwater conservation in crisis regions, especially in the district of Salamieh. This groundwater conservation policy has a number of inconsistencies. It is justified for the administration, and probably also for international donors, since it responds to what is presented as an indisputable environmental emergency. In practice, however, efforts to conserve water are anecdotal or even counterproductive. The water conservation policy appears a posteriori as an extension of the national policy of food self-sufficiency.
The dominant interpretation of overexploitation, and more generally of the water crisis, prevents any critical discussion of the status of resources and of the agricultural system in general, and thus blocks any attempt to discuss alternatives with respect to groundwater management, allocation, and their inclusion in development programs. A revisited diagnosis of the situation needs to take into account the spatial and temporal dimensions of groundwater exploitation and to analyze the co-evolution of hydrogeological and agricultural systems. It should highlight the adjustments adopted to cope with environmental and economic variability, changes in water availability and the enforcement of regulatory measures. These elements play an important role for water availability and for the spatial, temporal and sectoral allocation of the water resource. Groundwater exploitation over the last century has obviously had an impact on the environment, but the changes are not necessarily catastrophic. Current groundwater use in central Syria increases uncertainty by reducing the ability of aquifers to buffer climatic changes. However, the climatic factor is not the only source of uncertainty. The high volatility of the prices of commodities, fuel, land and water, which depend on the market but also on the will (and capacity) of the Syrian state to preserve social peace, is another strong source of uncertainty. Research should consider the whole range of possibilities and propose alternatives that take into consideration the risks they imply for water users, the political will to support or not support local access to water - thus involving a redefinition of economic and social objectives - and finally the ability of international organizations to reconsider pre-established diagnoses.
Abstract:
Executive Summary The first essay of this dissertation investigates whether greater exchange rate uncertainty (i.e., variation over time in the exchange rate) fosters or depresses the foreign investment of multinational firms. In addition to the direct capital financing it supplies, foreign investment can be a source of valuable technology and know-how, which can have substantial positive effects on a host country's economic growth. Thus, it is critically important for policy makers and central bankers, among others, to understand how multinationals base their investment decisions on the characteristics of foreign exchange markets. In this essay, I first develop a theoretical framework to improve our understanding of how the aggregate level of foreign investment responds to exchange rate uncertainty when an economy consists of many firms, each making its own decisions. The analysis predicts a U-shaped effect of exchange rate uncertainty on the total level of foreign investment in the economy: the effect is negative for low levels of uncertainty and positive for higher levels. This pattern emerges because the relationship between exchange rate volatility and the probability of investment is negative for firms with low productivity at home (i.e., firms that find it profitable to invest abroad) and positive for firms with high productivity at home (i.e., firms that prefer exporting their product). This finding stands in sharp contrast to predictions in the existing literature, which consider a single firm's decision to invest in a unique project. The main contribution of this research is to show that aggregation over many firms produces a U-shaped pattern between exchange rate uncertainty and the probability of investment. Using data from industrialized countries for the period 1982-2002, this essay offers a comprehensive empirical analysis that provides evidence in support of the theoretical prediction.
In the second essay, I aim to explain the time variation in sovereign credit risk, which captures the risk that a government may be unable to repay its debt. The importance of correctly evaluating such risk is illustrated by the central role of sovereign debt in previous international lending crises. In addition, sovereign debt is the largest asset class in emerging markets. In this essay, I provide a pricing formula for the evaluation of sovereign credit risk in which the decision to default on sovereign debt is made by the government. The pricing formula explains the variation across time in daily credit spreads - a widely used measure of credit risk - to a degree not offered by existing theoretical and empirical models. I use information on a country's stock market to compute the prevailing sovereign credit spread in that country. The pricing formula explains a substantial fraction of the time variation in daily credit spread changes for Brazil, Mexico, Peru, and Russia over the 1998-2008 period, particularly during the recent subprime crisis. I also show that when a government's incentive to default is allowed to depend on current economic conditions, one can best explain the level of credit spreads, especially during the recent period of financial distress. In the third essay, I show that the risk of sovereign default abroad can produce adverse consequences for the U.S. equity market through a decrease in returns and an increase in volatility. The risk of sovereign default, which is no longer limited to emerging economies, has recently become a major concern for financial markets. While sovereign debt plays an increasing role in today's financial environment, the effects of sovereign credit risk on the U.S. financial markets have been largely ignored in the literature. In this essay, I develop a theoretical framework that explores how the risk of sovereign default abroad helps explain the level and the volatility of U.S. equity returns.
The intuition for this effect is that negative economic shocks deteriorate the fiscal situation of foreign governments, thereby increasing the risk of a sovereign default that would trigger a local contraction in economic growth. The increased risk of an economic slowdown abroad amplifies the direct effect of these shocks on the level and the volatility of equity returns in the U.S. through two channels. The first channel involves a decrease in the future earnings of U.S. exporters resulting from unfavorable adjustments to the exchange rate. The second channel involves investors' incentives to rebalance their portfolios toward safer assets, which depresses U.S. equity prices. An empirical estimation of the model with monthly data for the 1994-2008 period provides evidence that the risk of sovereign default abroad generates a strong leverage effect during economic downturns, which helps to substantially explain the level and the volatility of U.S. equity returns.
Abstract:
We extend PML theory to account for information on the conditional moments up to order four, without assuming a parametric model, so as to avoid the risk of misspecifying the conditional distribution. The key statistical tool is the quartic exponential family, which allows us to generalize the PML2 and QGPML1 methods proposed in Gourieroux et al. (1984) to PML4 and QGPML2 methods, respectively. An asymptotic theory is developed. The key numerical tool is the Gauss-Freud integration scheme, which solves a computational problem previously raised in several fields. Simulation exercises demonstrate the feasibility and robustness of the methods. [Authors]
Abstract:
Among the PAH class of compounds, high-molecular-weight PAHs are now considered relevant cancer inducers, but not all of them have the same biological activity. Their analysis is difficult, however, mainly because of the presence of numerous isomers and their low volatility. Retention indices (Ri) for 13 dibenzopyrenes and homologues were determined by high-resolution capillary gas chromatography (GC) with four different stationary phases: a 5% phenyl-substituted methylpolysiloxane column (DB-5ms), a 35% phenyl-substituted methylpolysiloxane column (BPX-35), a 50% phenyl-substituted methylpolysiloxane column (BPX-50), and a 35% trifluoropropylmethyl polysiloxane stationary phase (Rtx-200). Correlations for retention on each phase were investigated using 8 independent molecular descriptors. Ri was shown to be linearly correlated with PAH volume, polarisability alpha, and Hückel pi energy on the four examined columns. The ionisation potential Ip is a fourth variable that improves the regression model for the DB-5ms, BPX-35, and BPX-50 columns; correlation coefficients ranging from r2 = 0.935 to r2 = 0.952 are then observed. The application of these indices to the identification and quantification of PAHs with MW 302 in certified diesel particulate matter SRM 1650a is presented and discussed. [Authors]
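The correlation analysis described above is an ordinary multiple linear regression of Ri on the molecular descriptors. A sketch with synthetic numbers (the actual descriptor values and the 13 measured indices are not reproduced here, so the coefficients and fit below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 13 compounds x 3 descriptors
# (molecular volume, polarisability, Hückel pi energy) - illustrative only.
X = rng.normal(size=(13, 3))
beta_true = np.array([40.0, 25.0, -10.0])   # assumed "true" sensitivities
ri = 300.0 + X @ beta_true + rng.normal(scale=2.0, size=13)

# Least-squares fit with an intercept column
A = np.column_stack([np.ones(13), X])
coef, *_ = np.linalg.lstsq(A, ri, rcond=None)

# Coefficient of determination r^2, the statistic quoted in the abstract
fitted = A @ coef
ss_res = ((ri - fitted) ** 2).sum()
ss_tot = ((ri - ri.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"r^2 = {r2:.3f}")
```

Adding a fourth descriptor (as the abstract does with the ionisation potential) simply means appending one more column to `X` and refitting.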
Abstract:
ABSTRACT: Research in empirical asset pricing has pointed out several anomalies, both in the cross section and time series of asset prices and in investors' portfolio choice. This dissertation aims to uncover the forces driving some of these "puzzling" asset pricing dynamics and portfolio decisions observed in financial markets. Throughout the dissertation I construct and study dynamic general-equilibrium models of heterogeneous investors in the presence of frictions and evaluate quantitatively their implications for financial-market asset prices and portfolio choice. I also explore the potential roots of puzzles in international finance. Chapter 1 shows that, by jointly introducing endogenous no-default borrowing constraints and heterogeneous beliefs in a dynamic general-equilibrium economy, many empirical features of stock return volatility can be reproduced. While most of the research on stock return volatility is empirical, this paper provides a theoretical framework that is able to reproduce simultaneously the cross-sectional and time-series stylized facts concerning stock returns and their volatility. In contrast to the existing theoretical literature on stock return volatility, I do not impose persistence or regimes in any of the exogenous state variables or in preferences. Volatility clustering, asymmetry in the stock return-volatility relationship, and the pricing of multi-factor volatility components in the cross section all arise endogenously as a consequence of the feedback between the binding of no-default constraints and heterogeneous beliefs. Chapters 2 and 3 explore the implications of differences of opinion across investors in different countries for international asset pricing anomalies. Chapter 2 demonstrates that several international finance "puzzles" can be reproduced by a single risk factor which captures heterogeneous beliefs across international investors.
These puzzles include: (i) home equity preference; (ii) the dependence of firm returns on local and foreign factors; (iii) the co-movement of returns and international capital flows; and (iv) abnormal returns around foreign firms' cross-listing events in the local market. These are reproduced in a setup with symmetric information and a perfectly integrated world with multiple countries and independent processes producing the same good. Chapter 3 shows that, by extending this framework to multiple goods and correlated production processes, the "forward premium puzzle" arises naturally as compensation for the heterogeneous expectations about the depreciation of the exchange rate held by international investors. Chapters 2 and 3 thus propose differences of opinion across international investors as a potential resolution of several international finance "puzzles". In a globalized world where both capital and information flow freely across countries, this explanation seems more appealing than existing asymmetric-information or segmented-markets theories aiming to explain international finance puzzles.
Abstract:
In studies of the natural history of HIV-1 infection, the time scale of primary interest is the time since infection. Unfortunately, this time is very often unknown for HIV infection, and using the follow-up time instead of the time since infection is likely to produce biased results because of onset confounding. Laboratory markers such as the CD4 T-cell count carry important information concerning disease progression and can be used to predict the unknown date of infection. Previous work on this topic has made use of only one CD4 measurement or has based the imputation on incident patients only. However, because of considerable intrinsic variability in CD4 levels, and because incident cases differ from prevalent cases, back-calculation based on only one CD4 determination per person or on characteristics of the incident sub-cohort may provide unreliable results. We therefore propose a methodology based on repeated individual CD4 T-cell marker measurements that uses both incident and prevalent cases to impute the unknown date of infection. Our approach jointly models the time since infection, the CD4 time path and the drop-out process. This methodology has been applied to estimate the CD4 slope and impute the unknown date of infection for HIV patients from the Swiss HIV Cohort Study. A procedure based on the comparison of different slope estimates is proposed to assess the goodness of fit of the imputation. Results of simulation studies indicate that the imputation procedure works well, despite the intrinsically high volatility of the CD4 marker.
Abstract:
BACKGROUND: Preventive treatment may avoid future cases of tuberculosis among asylum seekers. The effectiveness of preventive treatment depends in large part on treatment completion. METHODS: In a prospective cohort study, asylum seekers in two migration centres of the Swiss canton of Vaud were screened with the Interferon Gamma Release Assay (IGRA). Those with a positive IGRA were referred for medical examination. Individuals with active or past tuberculosis were excluded. Preventive treatment was offered to all participants with a positive IGRA but without active tuberculosis. Adherence was assessed during monthly follow-up. RESULTS: Of a population of 393 adult migrants, 98 (24.9%) had a positive IGRA. Eleven did not attend the initial medical assessment. Of the 87 examined, eight presented with pulmonary disease (five of them received a full course of antituberculous therapy), two had a history of prior tuberculosis treatment and two had contraindications to treatment. Preventive treatment was offered to 75 individuals (4 months of rifampicin in 74 and 9 months of isoniazid in one), of whom 60 (80%) completed the treatment. CONCLUSIONS: The vulnerability and the volatility of this population make screening and treatment adherence difficult. It seems possible to obtain a high completion rate using a short course of treatment in a closely monitored population living in stable housing conditions.
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we seek a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we bring an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on the ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to only one measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. This counterintuitive result suggests an inherent weakness in attempts to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
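The pointwise absolute Lorenz curve comparison used for the second-order stochastic dominance check can be sketched as follows. This is a simplified illustration with simulated normal returns standing in for the chapter's actual strategy returns:

```python
import numpy as np

def absolute_lorenz(returns, ps):
    """Integrated quantile function L(p) = integral of F^{-1}(u) du over [0, p],
    estimated from a sample; up to scaling, this is the expected shortfall
    at level p. A second-order stochastically dominates B iff L_A >= L_B
    pointwise."""
    x = np.sort(returns)
    n = len(x)
    cum = np.concatenate([[0.0], np.cumsum(x) / n])  # L at 0, 1/n, ..., 1
    grid = np.linspace(0.0, 1.0, n + 1)
    return np.interp(ps, grid, cum)

rng = np.random.default_rng(2)
a = rng.normal(0.05, 0.10, 10_000)  # higher mean, lower vol (illustrative)
b = rng.normal(0.03, 0.20, 10_000)  # comparison strategy (illustrative)

ps = np.linspace(0.01, 1.0, 100)
dominates = np.all(absolute_lorenz(a, ps) >= absolute_lorenz(b, ps))
print("a second-order stochastically dominates b:", dominates)
```

In the Gaussian case dominance holds whenever the mean is at least as high and the volatility no higher, which is what the pointwise check confirms here.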
Abstract:
Preface The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process - the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps; thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional one. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can in some cases be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, given the limits of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function remains fully justified and can be used to estimate the parameters of stochastic volatility jump-diffusion models.
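The mechanics of characteristic-function-based estimation can be sketched in a few lines. The example below is only a one-dimensional toy: it fits a normal model by minimizing a weighted integrated distance between the empirical and the model characteristic functions, which is the spirit of the ECF approach; the thesis's Continuous ECF estimator instead works with the joint unconditional characteristic function of a jump-diffusion. The weight function, frequency grid, and model here are illustrative assumptions, not those of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def ecf_objective(params, data, u_grid, weights):
    """Weighted squared distance between the empirical characteristic
    function of the data and the model CF, for a toy N(mu, sigma^2) model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    # empirical CF: phi_n(u) = (1/n) * sum_j exp(i * u * x_j)
    phi_emp = np.exp(1j * np.outer(u_grid, data)).mean(axis=1)
    # model CF of N(mu, sigma^2): exp(i*u*mu - 0.5*sigma^2*u^2)
    phi_mod = np.exp(1j * u_grid * mu - 0.5 * sigma**2 * u_grid**2)
    return np.sum(weights * np.abs(phi_emp - phi_mod) ** 2)

rng = np.random.default_rng(1)
data = rng.normal(0.3, 1.5, size=5000)             # synthetic sample, "true" (0.3, 1.5)
u = np.linspace(-3.0, 3.0, 61)                     # frequency grid
w = np.exp(-u**2)                                  # Gaussian weight damps high frequencies
res = minimize(ecf_objective, x0=[0.0, 0.0],
               args=(data, u, w), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Replacing the toy normal CF with the (known in closed form) unconditional characteristic function of an affine jump-diffusion, and the one-dimensional grid with a multi-dimensional one, gives the estimator family whose dimensionality trade-offs the thesis studies.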
Resumo:
Capillary electrophoresis has drawn considerable attention in the past few years, particularly in the field of chiral separations, because of its high separation efficiency. However, its routine use in therapeutic drug monitoring is hampered by its low sensitivity due to a short optical path. We have developed a capillary zone electrophoresis (CZE) method using 2 mM hydroxypropyl-β-cyclodextrin as a chiral selector, which allows baseline separation of the enantiomers of mianserin (MIA), desmethylmianserin (DMIA), and 8-hydroxymianserin (OHMIA). Through the use of an on-column sample concentration step after liquid-liquid extraction from plasma, and through the presence of an internal standard, the quantitation limits were found to be 5 ng/mL for each enantiomer of MIA and DMIA and 15 ng/mL for each enantiomer of OHMIA. To our knowledge, this is the first published CE method whose sensitivity, down to the low nanogram range, allows therapeutic monitoring of antidepressants. The variability of the assays, as assessed by the coefficients of variation (CV) measured at two concentrations for each substance, ranged from 2 to 14% for the intraday (eight replicates) and from 5 to 14% for the interday (eight replicates) experiments. The deviations from the theoretical concentrations, which represent the accuracy of the method, were all within 12.5%. A linear response was obtained for all compounds within the range of concentrations used for the calibration curves (10-150 ng/mL for each enantiomer of MIA and DMIA, and 20-300 ng/mL for each enantiomer of OHMIA). Good correlations were found between the [(R) + (S)]-MIA and DMIA concentrations measured in plasma samples of 20 patients by a nonchiral gas chromatography method and by CZE, and between the (R)- and (S)-concentrations of MIA and DMIA measured in plasma samples of 37 patients by a previously described chiral high-performance liquid chromatography method and by CZE.
Finally, no interference was noted from more than 20 other psychotropic drugs. Thus, this method, which is both sensitive and selective, can be routinely used for therapeutic monitoring of the enantiomers of MIA and its metabolites. It could prove particularly useful given the demonstrated interindividual variability of the stereoselective metabolism of MIA.
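The precision and accuracy figures reported in this abstract follow a standard validation calculation, which can be sketched in a few lines. The replicate values below are hypothetical, chosen only to mimic the described design (eight replicates at one quality-control concentration).

```python
import numpy as np

# Hypothetical intraday replicates (ng/mL) at a nominal 50 ng/mL QC level.
intraday = np.array([48.1, 51.2, 49.6, 50.4, 47.9, 52.0, 49.1, 50.7])
nominal = 50.0

# Precision: coefficient of variation (sample standard deviation / mean).
cv_pct = 100 * intraday.std(ddof=1) / intraday.mean()

# Accuracy: deviation of the mean from the nominal concentration.
bias_pct = 100 * (intraday.mean() - nominal) / nominal
```

An acceptable run in this study would show a CV within the reported 2-14% range and a bias within 12.5% of the theoretical concentration.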
Resumo:
A simple method for determining airborne monoethanolamine has been developed. Monoethanolamine determination has traditionally been difficult owing to analytical separation problems; even in recent, sophisticated methods this difficulty remains the major issue, often resulting in time-consuming sample preparation. Impregnated glass fiber filters were used for sampling. Desorption of monoethanolamine was followed by capillary GC analysis with nitrogen-phosphorus selective detection. Separation was achieved using a column suited to monoethanolamine (35% diphenyl / 65% dimethyl polysiloxane), with quinoline as the internal standard; no derivatization steps were needed. The calibration range was 0.5-80 μg/mL with good linearity (R² = 0.996). Averaged overall precisions and accuracies were 4.8% and -7.8% for intraday (n = 30), and 10.5% and -5.9% for interday (n = 72) measurements. Mean recovery from spiked filters was 92.8% for the intraday and 94.1% for the interday experiments. Monoethanolamine on stored spiked filters was stable for at least 4 weeks at 5°C. The newly developed method was applied among professional cleaners; air concentrations (n = 4) were 0.42 and 0.17 mg/m³ for personal and 0.23 and 0.43 mg/m³ for stationary measurements. The method described here is simple, sensitive, and convenient in terms of both sampling and analysis.
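The linearity figure quoted in this abstract comes from an ordinary least-squares calibration over the working range, which can be sketched as follows. The calibration points below are hypothetical, invented only to span the reported 0.5-80 μg/mL range; the real data are in the underlying study.

```python
import numpy as np

# Hypothetical calibration data: peak-area ratio (analyte / internal
# standard) versus spiked concentration over the reported working range.
conc = np.array([0.5, 1, 5, 10, 20, 40, 80])               # μg/mL
ratio = np.array([0.06, 0.11, 0.53, 1.04, 2.10, 4.05, 8.02])

slope, intercept = np.polyfit(conc, ratio, 1)              # least-squares line
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)

def quantify(r):
    """Back-calculate a concentration from a measured peak-area ratio."""
    return (r - intercept) / slope
```

Back-calculating quality-control samples through `quantify` and comparing them with their nominal values yields the precision and accuracy percentages of the kind reported above.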