50 results for First-order optimality condition
Abstract:
To study different temporal components of cancer mortality (age, period and cohort), methods of graphic representation were applied to Swiss mortality data from 1950 to 1984. Maps using continuous slopes ("contour maps") and based on eight tones of grey according to the absolute distribution of rates were used to represent the surfaces defined by the matrix of the various age-specific rates. Further, progressively more complex regression surface equations were defined on the basis of two independent variables (age/cohort) and a dependent one (each age-specific mortality rate). General patterns of trends in cancer mortality were thus identified, permitting the definition of important cohort effects (e.g., upwards for lung and other tobacco-related neoplasms, or downwards for stomach) or period effects (e.g., downwards for intestinal or thyroid cancers), besides the major underlying age component. For most cancer sites, even the lower-order (1st to 3rd) models utilised provided excellent fits, allowing immediate identification of the residuals (e.g., high or low mortality points) as well as estimates of first-order interactions between the three factors, although the parameters of the main effects remained undetermined. Thus, the method should essentially be used as a summary guide to illustrate and understand the general patterns of age, period and cohort effects in (cancer) mortality, although it cannot conceptually solve the inherent problem of identifiability of the three components.
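A minimal sketch of this kind of low-order regression surface: polynomial terms in age and cohort are fitted to log age-specific rates and the residuals are inspected, following the 1st- to 3rd-order surfaces mentioned above. The data below are synthetic placeholders, not the Swiss rates.

```python
# Sketch: fit 1st- to 3rd-order polynomial regression surfaces
# log(rate) = f(age, cohort) and inspect residuals (high/low mortality points).
# The data are synthetic placeholders, not the Swiss mortality rates.
import numpy as np

def design_matrix(a, c, order):
    """Polynomial terms a^i * c^j with i + j <= order."""
    cols = [a**i * c**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
age = rng.uniform(30, 80, 200)
cohort = rng.uniform(1880, 1950, 200)
log_rate = 0.05 * age - 0.002 * (cohort - 1880) + rng.normal(0, 0.1, 200)

# centre and scale the regressors to keep the design well conditioned
a = (age - age.mean()) / 10
c = (cohort - cohort.mean()) / 10

for order in (1, 2, 3):
    X = design_matrix(a, c, order)
    beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
    residuals = log_rate - X @ beta
    print(f"order {order}: residual SD = {residuals.std():.3f}")
```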
Abstract:
BACKGROUND: Bone graft substitutes such as calcium sulfate are frequently used as carrier materials for local antimicrobial therapy in orthopedic surgery. This study aimed to assess the systemic absorption and disposition of tobramycin in patients treated with a tobramycin-laden bone graft substitute (Osteoset® T). METHODS: Nine blood samples were taken from 12 patients over 10 days after surgical implantation of Osteoset® T. Tobramycin concentrations were measured by fluorescence polarization. Population pharmacokinetic analysis was performed using NONMEM to assess the average value and variability (CV) of the pharmacokinetic parameters. Bioavailability (F) was assessed by equating clearance (CL) with creatinine clearance (Cockcroft CLCr). Based on the final model, simulations with various doses and renal function levels were performed. (ClinicalTrials.gov number, NCT01938417). RESULTS: The patients were 52 ± 20 years old, their mean body weight was 73 ± 17 kg and their mean CLCr was 119 ± 55 mL/min. Either 10 g or 20 g Osteoset® T with 4% tobramycin sulfate was implanted at various sites. Concentration profiles remained low and were consistent with absorption rate-limited first-order release, while showing important variability. With CL equated to CLCr, the mean absorption rate constant (ka) was 0.06 h-1, F was 63% or 32% (CV 74%) for 10 g and 20 g Osteoset® T respectively, and the volume of distribution (V) was 16.6 L (CV 89%). Simulations predicted sustained high, potentially toxic concentrations with 10 g, 30 g and 50 g Osteoset® T for CLCr values below 10, 20 and 30 mL/min, respectively. CONCLUSIONS: Osteoset® T does not raise toxicity concerns in subjects without significant renal failure. The risk/benefit ratio might turn unfavorable in case of severe renal failure, even after standard-dose implantation.
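The structural model reported here, a one-compartment model with first-order absorption, has a closed-form concentration profile (the Bateman equation) that is easy to simulate. The sketch below uses parameter values from the abstract (ka = 0.06 h-1, V = 16.6 L, CL equated with creatinine clearance); the dose, bioavailability and time grid are illustrative assumptions, and ka < ke reproduces the absorption rate-limited ("flip-flop") behaviour described.

```python
# Sketch of a one-compartment model with first-order absorption (Bateman equation).
# Parameter values follow the abstract (ka = 0.06 h^-1, V = 16.6 L, CL equated with
# creatinine clearance); the dose, F and time grid are illustrative assumptions.
import numpy as np

def conc_profile(t_h, dose_mg, F, ka, CL_L_h, V_L):
    """Plasma concentration (mg/L) after a single extravascular dose."""
    ke = CL_L_h / V_L
    return (F * dose_mg * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0, 240, 241)                 # 10 days, hourly
clcr_ml_min = 30                             # impaired renal function (illustrative)
CL = clcr_ml_min * 60 / 1000                 # mL/min -> L/h
dose = 400                                   # mg, i.e. 4% of a 10 g implant
c = conc_profile(t, dose_mg=dose, F=0.63, ka=0.06, CL_L_h=CL, V_L=16.6)
print(f"Cmax ≈ {c.max():.1f} mg/L at t = {t[c.argmax()]:.0f} h")
```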
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
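A hedged sketch of the two dominance checks described above, on synthetic return series standing in for the realized returns: a two-sample Kolmogorov-Smirnov test, and a pointwise comparison of empirical absolute Lorenz curves (cumulative integrals of the quantile function, i.e. a sequence of expected shortfalls across quantiles).

```python
# Sketch: KS test plus pointwise comparison of empirical absolute Lorenz curves.
# The return series below are synthetic stand-ins for realized portfolio returns.
import numpy as np
from scipy import stats

def absolute_lorenz(returns, grid):
    """Empirical L(p) = integral_0^p of the quantile function, on a grid of p."""
    x = np.sort(returns)
    n = len(x)
    cum = np.concatenate(([0.0], np.cumsum(x))) / n
    idx = np.floor(grid * n).astype(int)
    return cum[idx]

rng = np.random.default_rng(1)
r_agg = rng.normal(0.006, 0.04, 1000)      # aggregated-measure portfolio (assumed)
r_single = rng.normal(0.004, 0.05, 1000)   # single-measure portfolio (assumed)

print(stats.ks_2samp(r_agg, r_single))     # are the two distributions different?

p = np.linspace(0.01, 1.0, 100)
ssd = np.all(absolute_lorenz(r_agg, p) >= absolute_lorenz(r_single, p))
print("second-order stochastic dominance (pointwise):", ssd)
```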
Abstract:
General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the magnitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might enhance productivity. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects).
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (ignoring price effects) to compute a static first-order trade effect. This trade effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, the effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized (a toy sketch of this reallocation appears after this summary). The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new order-of-magnitude estimates of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
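The second experiment of chapter two, referenced above, is a linear program: reallocate labour across sectors within each country to bound world SO2 emissions, subject to country-employment and world industry-production constraints. Below is a toy sketch of that setup using scipy.optimize.linprog; the dimensions, intensities and totals are invented, and the world industry-production constraint is proxied here by sectoral labour totals (i.e., fixed productivity), an assumption of this illustration rather than the chapter's exact formulation.

```python
# Toy sketch: bound world SO2 emissions by reallocating labour across sectors
# within each country, holding country employment and world industry totals fixed.
import numpy as np
from scipy.optimize import linprog

e = np.array([[2.0, 8.0, 1.0],      # emission intensity per unit labour (invented),
              [4.0, 3.0, 6.0]])     # rows = countries, cols = sectors
n_c, n_s = e.shape

country_total = np.array([100.0, 150.0])        # employment per country
industry_total = np.array([80.0, 90.0, 80.0])   # world total per sector (sums match)

A_eq, b_eq = [], []
for c in range(n_c):                            # country-employment constraints
    row = np.zeros(n_c * n_s); row[c * n_s:(c + 1) * n_s] = 1
    A_eq.append(row); b_eq.append(country_total[c])
for s in range(n_s):                            # world industry constraints
    row = np.zeros(n_c * n_s); row[s::n_s] = 1
    A_eq.append(row); b_eq.append(industry_total[s])

best = linprog(e.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
               bounds=[(0, None)] * (n_c * n_s), method="highs")
worst = linprog(-e.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                bounds=[(0, None)] * (n_c * n_s), method="highs")
print("minimum emissions:", best.fun, "maximum emissions:", -worst.fun)
```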
Abstract:
BACKGROUND: Aminoglycosides are mandatory in the treatment of severe infections in burns. However, their pharmacokinetics are difficult to predict in critically ill patients. Our objective was to describe the pharmacokinetic parameters of high doses of tobramycin administered at extended intervals in severely burned patients. METHODS: We prospectively enrolled 23 burned patients receiving tobramycin in combination therapy for Pseudomonas species infections in a burn ICU over 2 years within a therapeutic drug monitoring program. Trough and post-peak tobramycin levels were measured to adjust drug dosage. Pharmacokinetic parameters were derived from two-point first-order kinetics. RESULTS: The tobramycin peak concentration was 7.4 (3.1-19.6) microg/ml and the Cmax/MIC ratio 14.8 (2.8-39.2). The half-life was 6.9 (range 1.8-24.6) h with a distribution volume of 0.4 (0.2-1.0) l/kg. Clearance was 35 (14-121) ml/min and was weakly but significantly correlated with creatinine clearance. CONCLUSION: Tobramycin had a normal clearance, but an increased volume of distribution and a prolonged half-life in burned patients. Moreover, the pharmacokinetic parameters of tobramycin were highly variable in burned patients. These data support extended-interval administration and strongly suggest that aminoglycosides should only be used within a structured pharmacokinetic monitoring program.
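A small sketch of what such a two-point calculation under first-order elimination looks like: the elimination rate constant from the log ratio of a post-peak and a trough level, then half-life, a back-extrapolated volume of distribution and clearance. The concentrations, sampling times and dose below are illustrative values, not patient data, and the volume estimate ignores infusion duration.

```python
# Two-point first-order (log-linear) kinetics; all numbers are illustrative only.
import math

c_peak, t_peak = 7.4, 1.0       # mg/L, h after end of infusion (post-peak sample)
c_trough, t_trough = 0.6, 23.0  # mg/L, h (trough before next dose)

ke = math.log(c_peak / c_trough) / (t_trough - t_peak)  # elimination rate constant, h^-1
t_half = math.log(2) / ke                               # elimination half-life, h

dose_mg, weight_kg = 560, 70                            # e.g. 8 mg/kg once daily (assumed)
c0 = c_peak * math.exp(ke * t_peak)                     # back-extrapolated concentration
V = dose_mg / c0                                        # apparent volume of distribution, L
CL = ke * V                                             # clearance, L/h
print(f"t1/2 = {t_half:.1f} h, V = {V / weight_kg:.2f} L/kg, CL = {CL * 1000 / 60:.0f} mL/min")
```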
Abstract:
This thesis is a compilation of projects studying the sediment processes that recharge debris-flow channels. These works, conducted during my stay at the University of Lausanne, focus on the geological and morphological controls exerted by torrent catchments on debris supply, a fundamental element in predicting debris flows. Other aspects of sediment dynamics are also considered, e.g. the headwaters-torrent coupling, as well as the development of a modeling software that simulates sediment transfer in torrent systems. The sediment activity at Manival, an active torrent system of the northern French Alps, was investigated using terrestrial laser scanning, supplemented with geostructural investigations and a survey of sediment transferred in the main torrent. A full year of sediment flux could be observed, which coincided with two debris flows and several bedload transport events. This study revealed that both debris flows were generated in the torrent and were preceded in time by a recharge of material from the headwaters. Debris production occurred mostly during winter and early spring and was caused by large slope failures. Sediment transfers were more puzzling, occurring almost exclusively in early spring, subordinate to runoff conditions, and in autumn during prolonged rainfall. Intense summer rainstorms did not affect debris storage, which seems to depend on the stability of the debris deposits. The morpho-geological influence on debris supply was evaluated using DEMs and field surveys. A slope angle-based classification of the topography could characterize the mode of debris production and transfer. A slope stability analysis derived from the structures in the rock mass could assess the susceptibility to failure. The modeled rockfall source areas included more than 97% of the recorded events, and the sediment budgets appeared to be correlated with the density of potential slope failures. This work showed that analysing process-related terrain morphology and susceptibility to slope failure documents the sediment dynamics and allows a quantitative assessment of the erosion zones leading to debris-flow activity. The development of erosional landforms was evaluated by analyzing their geometry together with the orientations of potential rock slope failures and the direction of the maximum joint frequency. Structures in the rock mass, in particular wedge failures and the dominant discontinuities, appear to be a first-order control on the erosional mechanisms affecting bedrock-dominated catchments. They represent weaknesses that are exploited primarily by mass-wasting processes and erosion, promoting not only the initiation of rock couloirs and gullies, but also their propagation. Incorporating this geological control into geomorphic processes contributes to a better understanding of the landscape evolution of active catchments. A sediment flux algorithm was implemented in a sediment cascade model that discretizes the torrent catchment into channel reaches and individual process-response systems. Each conceptual element includes, in a simple manner, geomorphological and sediment-flux information derived from GIS analysis complemented by field mapping. This tool enables the simulation of sediment transfer in channels under evolving debris supply and conveyance, and helps reduce the uncertainty inherent in sediment budget prediction in torrent systems.
This work aims, very humbly, to shed light on a few aspects of sediment dynamics in torrent systems.
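As a purely illustrative companion to the sediment cascade model mentioned above, the sketch below routes sediment through a chain of channel reaches, each passing downstream the smaller of the available sediment (storage plus upstream input) and its transport capacity for an event. The reach structure, attribute names and numbers are assumptions, not the thesis software.

```python
# Minimal sediment-cascade sketch: the catchment is discretized into channel reaches,
# each holding a debris storage and passing downstream the minimum of the available
# sediment and its transport capacity for one event. All values are invented.
from dataclasses import dataclass

@dataclass
class Reach:
    storage_m3: float       # debris currently stored in the reach
    capacity_m3: float      # transport capacity for one runoff event

def route_event(reaches, hillslope_input_m3):
    """Route one event from the headwaters down the reach sequence."""
    influx = hillslope_input_m3
    for r in reaches:
        available = r.storage_m3 + influx
        outflux = min(available, r.capacity_m3)
        r.storage_m3 = available - outflux    # what the event could not carry stays stored
        influx = outflux
    return influx                             # sediment exported at the fan / outlet

torrent = [Reach(500, 300), Reach(200, 400), Reach(50, 350)]
print("exported volume:", route_event(torrent, hillslope_input_m3=150), "m3")
```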
Abstract:
Current measures of ability emotional intelligence (EI), including the well-known Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), suffer from several limitations, including low discriminant validity and questionable construct and incremental validity. We show that the MSCEIT is largely predicted by personality dimensions, general intelligence, and demographics, with multiple Rs for the MSCEIT branches of up to .66; for the general EI factor this relation was even stronger (multiple R = .76). Concerning the factor structure of the MSCEIT, we found support for four first-order factors, which had differential relations with personality, but no support for a higher-order global EI factor. We discuss implications for employing the MSCEIT, including (a) using the single branch scores rather than the total score, (b) always controlling for personality and general intelligence to ensure unbiased parameter estimates for the EI factors, and (c) correcting for measurement error. Failure to account for these methodological aspects may severely compromise predictive validity testing. We also discuss avenues for the improvement of ability-based tests.
Abstract:
We have explored the possibility of obtaining first-order permeability estimates for saturated alluvial sediments based on the poro-elastic interpretation of the P-wave velocity dispersion inferred from sonic logs. Modern sonic logging tools designed for environmental and engineering applications allow P-wave velocity measurements at multiple emitter frequencies over a bandwidth covering 5 to 10 octaves. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt to sand range and typical emitter frequencies ranging from approximately 1 to 30 kHz, the observable velocity dispersion should be sufficiently pronounced to allow reliable first-order estimation of the permeability structure. The corresponding predictions have been tested on, and verified for, a borehole penetrating a typical surficial alluvial aquifer. In addition to multifrequency sonic logs, a comprehensive suite of nuclear and electrical logs, an S-wave log, a litholog, and a limited number of laboratory measurements of the permeability of retrieved core material were also available. This complementary information was found to be essential for parameterizing the poro-elastic inversion procedure and for assessing the uncertainty and internal consistency of the corresponding permeability estimates. Our results indicate that the permeability estimates thus obtained are largely consistent with those expected from the corresponding granulometric characteristics, as well as with the available evidence from laboratory measurements. These findings are also consistent with evidence from ocean acoustics, which indicates that, over a frequency range of several orders of magnitude, the classical theory of poro-elasticity is generally capable of explaining the observed P-wave velocity dispersion in medium- to fine-grained seabed sediments.
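A first-order sketch of the poro-elastic link underlying such estimates: Biot's characteristic frequency, f_c = φη/(2πρ_f κ), marks where P-wave velocity dispersion is centred, so an observed transition frequency gives an order-of-magnitude permeability. This is the textbook Biot relation, not necessarily the full inversion procedure of the study; the porosity and fluid properties below are assumed values for a saturated sand.

```python
# Invert Biot's characteristic frequency f_c = phi * eta / (2 * pi * rho_f * k)
# for an order-of-magnitude permeability; property values are assumptions.
import math

def permeability_from_fc(f_c_hz, porosity, eta_pa_s=1.0e-3, rho_f=1000.0):
    """Intrinsic permeability (m^2) from the Biot transition frequency."""
    return porosity * eta_pa_s / (2 * math.pi * rho_f * f_c_hz)

for f_c in (1e3, 1e4, 3e4):            # transition frequencies within the sonic band
    k = permeability_from_fc(f_c, porosity=0.35)
    print(f"f_c = {f_c:8.0f} Hz  ->  k ≈ {k:.1e} m^2  (≈ {k / 9.87e-13:.2f} darcy)")
```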
Abstract:
Objective: Bone cements and substitutes are commonly used in surgery to deliver antibiotics locally. The objective of this study was to assess the systemic absorption and disposition of vancomycin in patients treated with an active calcium sulfate bone filler and to predict systemic concentrations under various conditions. Method: 277 blood samples were taken from 42 patients receiving vancomycin in bone cement during surgery. Blood samples were collected from 3 h to 10 days after implantation. Vancomycin was measured by immunoenzymatic assay. Population pharmacokinetic (PK) analysis was performed using NONMEM to assess average estimates and the variability of PK parameters. Based on the final model, simulations with various doses and renal function levels were performed. Results: The patients were 64 ± 20 years old, their body weight was 81 ± 22 kg and their Cockcroft-Gault creatinine clearance (CLcr) 98 ± 55 mL/min. Vancomycin doses ranged from 200 mg to 6000 mg and the implantation sites were the hip (n=16), tibia (10) or others (16). Concentration profiles remained low and were consistent with absorption rate-limited first-order release, while showing prominent variability. Mean clearance (CL) was 3.87 L/h (CV 35%), the absorption rate constant (ka) 0.004 h-1 (66%) and the volume of distribution (V) 9.5 L. Simulations with up to 8000 mg of implanted vancomycin showed systemic concentrations exceeding 20 mg/L for 3.5 days in 43% of patients with a CLcr of 15 mL/min, whereas 7% of patients with normal renal function had a concentration above 20 mg/L for 1.1 days. Subtherapeutic concentrations (0.4-4 mg/L) were predicted for a median of 22 days in patients with normal renal function and a 4000 mg vancomycin implant, with limited influence of dose or renal function. Conclusion: Vancomycin-laden calcium sulfate implants do not raise toxicity concerns. Selection of resistant bacteria, such as Enterococcus and Staphylococcus species, might however be a concern, as simulations show persistent subtherapeutic systemic concentrations for 3 to 4 weeks in these patients.
Abstract:
BACKGROUND: The aim of this study was to evaluate the efficacy of sustained release of vancomycin and teicoplanin from a resorbable gelatin glycerol sponge, in order to establish a new delivery system for local anti-infective therapy. MATERIALS AND METHODS: 60 plasticized glycerol gelatin sponges containing either 10 or 20% gelatin (w/v) were incubated in vancomycin or teicoplanin solution at 20 degrees C for either 1 or 24 h. The in vitro release properties of the sponges were investigated over a period of 1 week by determining the levels of vancomycin and teicoplanin eluted into plasma using a fluorescence polarization immunoassay. The rate constant and the half-life of antibiotic release for each group were calculated by linear regression assuming first-order kinetics. RESULTS: Presoaking for 24 h was associated with a significant increase in total antibiotic release in all groups compared with 1 h of incubation, except for the 10% sponges presoaked in teicoplanin. Doubling the gelatin content of the sponges from 10 to 20% significantly increased the total release of the antibiotic load only in teicoplanin-containing sponges after 24 h of incubation. In all corresponding groups investigated, the release of vancomycin was more prolonged than that of teicoplanin, which allowed a gradual release beyond 5 days. The half-life (h ± SEM) of both types of vancomycin-containing sponges was significantly prolonged by 24 h incubation in comparison with 1 h incubation (29.1 ± 5.9 vs. 5.9 ± 1.0, p < 0.001; 30.0 ± 2.1 vs. 11.1 ± 1.9, p < 0.001). However, neither doubling the gelatin content of the sponges nor prolonged incubation was associated with a significantly prolonged delivery of teicoplanin. CONCLUSION: This study demonstrated a better diffusion-controlled release from vancomycin-impregnated glycerol gelatin sponges than from those pretreated with teicoplanin. The plasticized glycerol gelatin sponge may be a promising carrier for the application of vancomycin to infected wounds for local anti-infective therapy.
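A sketch of the half-life calculation described in the methods: assume first-order release, regress the log of the antibiotic remaining in the sponge against time, and take the rate constant from the slope. The elution time points and fractions below are invented, not the study's data.

```python
# First-order release: k and t1/2 from a log-linear regression; data are invented.
import numpy as np

t_h = np.array([0, 6, 24, 48, 72, 120, 168])                       # elution times, h
remaining = np.array([1.0, 0.88, 0.62, 0.40, 0.27, 0.12, 0.05])    # fraction still in sponge

slope, intercept = np.polyfit(t_h, np.log(remaining), 1)
k = -slope                       # first-order release rate constant, h^-1
t_half = np.log(2) / k           # release half-life, h
print(f"k = {k:.4f} h^-1, t1/2 = {t_half:.1f} h")
```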
Abstract:
AIM: Total imatinib concentrations are currently measured for the therapeutic drug monitoring of imatinib, whereas only the free drug equilibrates with cells for pharmacological action. Due to technical and cost limitations, routine measurement of free concentrations is generally not performed. In this study, free and total imatinib concentrations were measured to establish a model allowing the confident prediction of imatinib free concentrations based on total concentrations and plasma protein measurements. METHODS: One hundred and fifty total and free plasma concentrations of imatinib were measured in 49 patients with gastrointestinal stromal tumours. A population pharmacokinetic model was built to characterize mean total and free concentrations with interpatient and intrapatient variability, while taking into account α1-acid glycoprotein (AGP) and human serum albumin (HSA) concentrations, in addition to other demographic and environmental covariates. RESULTS: A one-compartment model with first-order absorption was used to characterize total and free imatinib concentrations. Only AGP influenced imatinib total clearance. Imatinib free concentrations were best predicted using a non-linear binding model to AGP, with a dissociation constant Kd of 319 ng ml(-1), assuming a 1:1 molar binding ratio. The addition of HSA to the equation did not improve the prediction of imatinib unbound concentrations. CONCLUSION: Although free concentration monitoring is probably more appropriate than monitoring total concentrations, it requires an additional ultrafiltration step and sensitive analytical technology, which are not always available in clinical laboratories. The model proposed might represent a convenient approach to estimating imatinib free concentrations. However, therapeutic ranges for free imatinib concentrations remain to be established before such monitoring enters routine practice.
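A hedged worked example of the 1:1 saturable binding model named above: total = free + Bmax·free/(Kd + free), with Kd = 319 ng/mL from the abstract and Bmax taken as the molar AGP concentration expressed in imatinib mass units. The molecular weights and the sample AGP and total values are assumptions for illustration, not the fitted population model itself.

```python
# Solve the 1:1 binding model for the free concentration given total and AGP.
# Kd is from the abstract; molecular weights and sample values are assumptions.
import math

KD = 319.0            # ng/mL, dissociation constant reported in the abstract
MW_IMATINIB = 493.6   # g/mol (free base, approximate)
MW_AGP = 41_000.0     # g/mol (approximate)

def free_imatinib(total_ng_ml, agp_g_l):
    """Positive root of free^2 + (Kd + Bmax - total)*free - Kd*total = 0."""
    bmax = agp_g_l / MW_AGP * MW_IMATINIB * 1e6   # g/L -> mol/L -> ng/mL of binding sites
    b = KD + bmax - total_ng_ml
    return (-b + math.sqrt(b * b + 4 * KD * total_ng_ml)) / 2

total = 2500.0        # ng/mL, a plausible imatinib trough level (illustrative)
for agp in (0.5, 1.0, 2.0):                       # g/L, low to inflammatory AGP levels
    f = free_imatinib(total, agp)
    print(f"AGP = {agp:.1f} g/L -> free ≈ {f:.0f} ng/mL ({100 * f / total:.1f}% unbound)")
```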
Abstract:
Surface geological mapping, laboratory measurements of rock properties, and seismic reflection data are integrated through three-dimensional seismic modeling to determine the likely cause of upper crustal reflections and to elucidate the deep structure of the Penninic Alps in eastern Switzerland. The results indicate that the principal upper crustal reflections recorded at the south end of Swiss seismic line NFP20-EAST can be explained by the subsurface geometry of stacked basement nappes. In addition, the modeling results provide improvements to structural maps based solely on surface trends and suggest the presence of previously unrecognized rock units in the subsurface. Construction of the initial model is based upon extrapolation of plunging surface structures; velocities and densities are established by laboratory measurements of the corresponding rock units. Iterative modification produces a best-fit model that refines the definition of the subsurface geometry of the major structures. We conclude that most reflections from the upper 20 km can be ascribed to the presence of sedimentary cover rocks (especially carbonates) and ophiolites juxtaposed against crystalline basement nappes. Thus, in this area, reflections appear to be principally due to first-order lithologic contrasts. This study also demonstrates not only the importance of three-dimensional effects (sideswipe) in interpreting seismic data, but also that these effects can be treated quantitatively through three-dimensional modeling.
Abstract:
INTRODUCTION: Osteoset® T is a calcium sulphate void filler containing 4% tobramycin sulphate, used to treat bone and soft tissue infections. Despite systemic exposure to the antibiotic, no pharmacokinetic studies in humans have been published so far. Based on the observations made in our patients, a model predicting tobramycin serum levels and evaluating their toxicity potential is presented. METHODS: Following implantation of Osteoset® T, tobramycin serum concentrations were monitored systematically. A pharmacokinetic analysis was performed using a non-linear mixed-effects approach based on a one-compartment model with first-order absorption. RESULTS: Data from 12 patients treated between October 2006 and March 2008 were analysed. Concentration profiles were consistent with slow first-order release and one-compartment kinetics, whilst showing important variability. Predicted tobramycin serum concentrations depended clearly on both the implanted drug amount and renal function. DISCUSSION AND CONCLUSION: Despite the popularity of aminoglycosides for local antibiotic therapy, pharmacokinetic data for this indication are scarce, and none are available for calcium sulphate as a carrier material. Systemic exposure to tobramycin after implantation of Osteoset® T appears reassuring with regard to toxicity, except in cases of markedly impaired renal function. We recommend adapting the dosage to the estimated creatinine clearance rather than solely to the patient's weight.
Abstract:
Valganciclovir (Valcyte®, VGC) is an orally administered ester prodrug of the standard anti-cytomegalovirus (CMV) drug ganciclovir (GCV). This drug has enabled an important reduction of the burden of CMV morbidity and mortality in solid organ transplant recipients. Prevention of CMV infection and treatment of CMV disease require drug administration over many weeks, so oral administration is convenient. Valganciclovir was developed to overcome the poor oral availability of ganciclovir, which limits its exposure after oral administration and thus its efficacy. This prodrug crosses the intestinal barrier efficiently and is then hydrolyzed into ganciclovir, providing exposure similar to that of intravenous ganciclovir. Valganciclovir is now preferred for the prophylaxis and treatment of CMV infection in solid organ transplant recipients. Nevertheless, adequate dosage adjustment is necessary to optimize its use, avoiding either insufficient or excessive exposure arising from differences in its pharmacokinetic profile between patients. The main goal of this thesis was to better describe the pharmacokinetic and pharmacodynamic profile of valganciclovir in solid organ transplant recipients, to assess its reproducibility and predictability, and thus to evaluate the current recommendations for valganciclovir dosage adjustment and the potential contribution of routine therapeutic drug monitoring (TDM) to patient management. A total of 437 ganciclovir plasma concentrations from 65 transplant patients (41 kidney, 12 lung, 10 heart and 2 liver recipients; 58 under oral valganciclovir prophylaxis, 8 under oral valganciclovir treatment and 2 under intravenous ganciclovir) were measured using a validated chromatographic method (HPLC) developed for this study. The results were analyzed by non-linear mixed-effects modeling (NONMEM). A two-compartment model with first-order absorption appropriately described the data. Systemic clearance was markedly influenced by GFR, with further differences between graft types and sexes (CL/GFR = 1.7 in kidney, 0.9 in heart and 1.2 in lung and liver recipients), with an interpatient variability (CV) of 26% and an interoccasion variability of 12%. Body weight and sex influenced the central volume of distribution (V1 = 0.34 l/kg in males and 0.27 l/kg in females), with an interpatient variability of 20%. Residual intrapatient variability was 21%. No significant drug interaction influenced GCV disposition. VGC prophylactic efficacy and tolerability were good, without detectable dependence on the GCV profile. In conclusion, this analysis highlights the importance of thorough adjustment of the VGC dosage to renal function and body weight. Considering the good predictability and reproducibility of the GCV profile after oral VGC in solid organ transplant recipients, routine TDM does not appear to be clinically indicated. However, GCV plasma measurement may still be helpful in specific clinical situations, such as documentation of appropriate exposure in patients with potentially compromised absorption, lack of response to CMV disease treatment, or renal replacement therapy.
LAY SUMMARY: Valganciclovir is a precursor capable of releasing ganciclovir, recently developed to improve the poor oral absorption of the latter. Once valganciclovir is absorbed, the ganciclovir released into the bloodstream becomes effective against cytomegalovirus infections. This widespread virus causes insidious and sometimes serious disease in people with weakened immune defences, such as organ transplant recipients receiving anti-rejection treatment. Ganciclovir is administered for several consecutive months, either to prevent infection after transplantation or to treat a declared infection. The ease of oral administration of valganciclovir is an advantage over administration of ganciclovir by infusion, which requires hospitalization. However, the oral route can be an additional source of variability between patients, with a potential impact on the efficacy or toxicity of the drug. The aims of this study were: to describe the fate of this drug in the human body (the subject of pharmacokinetics); to identify the clinical factors that may explain the differences in blood concentration observed between patients under a given dosage; and to explore the relationships between drug concentrations in the blood and efficacy or the occurrence of adverse effects (the subject of pharmacodynamics). This study required the development and validation of an analytical method to measure the blood concentration of ganciclovir, and its application to 437 samples from 65 solid organ transplant recipients (41 kidney, 12 lung, 10 heart and 2 liver) receiving valganciclovir. The measurements were analysed with a mathematical tool in order to build a model of the disposition of the drug in the blood of each patient on each occasion. The study evaluated, in patients receiving valganciclovir, the rates at which the body absorbs, distributes and then eliminates the drug. The elimination rate depended closely on renal function, graft type and sex, whereas distribution depended on the patient's weight and sex. The variability not explained by these clinical factors was moderate and probably without obvious clinical consequence for efficacy or tolerability, which were very satisfactory in the study patients. The observations revealed no relationship between drug concentrations and therapeutic efficacy or the occurrence of adverse effects, confirming that the relatively low doses used in our patient population were sufficient to produce reproducible exposure at adequate concentrations. In conclusion, the profile (and therefore the absorption) of valganciclovir in transplant patients appears well predictable once the dose is adjusted to renal function and body weight. Systematic monitoring of blood concentrations is probably not indicated as a routine, but such measurement may be of interest in certain specific situations.
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOIs) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with a DFOV of 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs showed a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This shows that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
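A hedged sketch of this kind of 3D NPS estimation: extract VOIs from a noise volume, remove structured noise with a first-order (linear) detrend, and average the scaled squared magnitude of the 3D DFT, NPS(fx,fy,fz) = (bx·by·bz)/(Lx·Ly·Lz) · mean_i |DFT3D(VOI_i - fit_i)|². The noise volume below is synthetic white noise; bx,y and Lx,y,z echo the abstract (0.391 mm, 64 voxels), and bz = 0.625 mm is derived from the 40 mm VOI length divided by 64, an assumption of this illustration.

```python
# 3D NPS sketch: first-order detrend per VOI, then averaged, scaled |FFT|^2.
# Synthetic white-noise volume; bx,y and Lx,y,z follow the abstract, bz is assumed.
import numpy as np

def detrend_first_order(voi):
    """Fit and subtract a first-order (linear) trend a + b*x + c*y + d*z."""
    nx, ny, nz = voi.shape
    x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    A = np.column_stack([np.ones(voi.size), x.ravel(), y.ravel(), z.ravel()])
    coef, *_ = np.linalg.lstsq(A, voi.ravel(), rcond=None)
    return voi - (A @ coef).reshape(voi.shape)

def nps_3d(vois, bx, by, bz):
    L = vois[0].shape
    scale = (bx * by * bz) / (L[0] * L[1] * L[2])
    spectra = [np.abs(np.fft.fftn(detrend_first_order(v))) ** 2 for v in vois]
    return scale * np.mean(spectra, axis=0)

rng = np.random.default_rng(2)
volume = rng.normal(0, 10, (64, 64, 256))                   # stand-in noise volume (HU)
vois = [volume[:, :, k:k + 64] for k in range(0, 256, 64)]  # non-overlapping VOIs
nps = nps_3d(vois, bx=0.391, by=0.391, bz=0.625)
# sanity check: the NPS integral should recover the noise variance (~100 HU^2)
print(nps.shape, f"integral ≈ variance: {nps.sum() / (0.391 * 0.391 * 0.625 * 64**3):.1f}")
```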