57 results for first order transition system


Relevance: 100.00%

Abstract:

Introduction: The interhemispheric asymmetries that originate from connectivity-related structuring of the cerebral cortex are compromised in schizophrenia (SZ). Recently, we revealed the whole-head topography of EEG synchronization in SZ (Jalili et al. 2007; Knyazeva et al. 2008). Here we extended that analysis to assess abnormalities in the asymmetry of synchronization, motivated by the evidence that the interhemispheric asymmetries suspected to be abnormal in SZ originate from the connectivity-related structuring of the cortex. Methods: Thirteen right-handed SZ patients and thirteen matched controls participated in this study; multichannel (128-channel) EEGs were recorded for 3-5 minutes at rest. Laplacian EEGs (LEEGs) were then calculated using a 2-D spline. The LEEGs were analyzed by calculating the power spectral density with Welch's averaged periodogram method. Furthermore, using a state-space-based multivariate synchronization measure, the S-estimator, we analyzed correlates of functional cortico-cortical connectivity in SZ patients compared with controls. S-estimator values were obtained at three spatial scales: first-order neighbors of each sensor location, second-order neighbors, and the whole hemisphere. The synchronization measures, based on LEEGs in the alpha and beta bands, were thus tuned to local, intraregional, and long-distance spatial scales. To assess between-group differences, we used a permutation version of Hotelling's T2 test. For correlation analysis, the Spearman rank correlation was calculated. Results: Compared to the controls, who had rightward asymmetry at a local level (LEEG power), rightward anterior and leftward posterior asymmetries at an intraregional level (first- and second-order S-estimator), and rightward global asymmetry (hemispheric S-estimator), SZ patients showed generally attenuated asymmetry, the effect being strongest for intraregional synchronization. This deviation in asymmetry along the anterior-to-posterior axis is consistent with the so-called Yakovlevian, or anticlockwise, cerebral torque. Moreover, the negative occipital and positive frontal asymmetry values suggest higher regional synchronization among the left occipital and the right frontal locations relative to their symmetrical counterparts. Correlation analysis linked the posterior intraregional and hemispheric abnormalities to negative SZ symptoms, whereas the asymmetry of LEEG power appeared to be only weakly coupled to clinical ratings. The posterior intraregional abnormalities of asymmetry increased with the duration of the disease. Tentative links between these findings and gross anatomical asymmetries, including the cerebral torque and gyrification pattern in normal subjects and SZ patients, are discussed. Conclusions: Overall, our findings reveal abnormalities of synchronization asymmetry in SZ patients, with heavy involvement of the right hemisphere. These results indicate that the anomalous asymmetry of cortico-cortical connections in schizophrenia is amenable to electrophysiological analysis.
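
A minimal sketch of the two analysis steps named above (Welch power spectral density and the S-estimator), assuming the LEEG is available as a channels-by-samples NumPy array; the S-estimator follows its usual eigenvalue-entropy definition, and the data and segment length here are illustrative, not those of the study.

import numpy as np
from scipy.signal import welch

def psd_welch(leeg, fs):
    """Per-channel power spectral density via Welch's averaged periodogram."""
    return welch(leeg, fs=fs, nperseg=2 * fs)  # 2-second segments (assumed)

def s_estimator(leeg):
    """Multivariate synchronization: 1 minus the normalized entropy of the
    eigenvalue spectrum of the channel correlation matrix (0 = independent
    channels, 1 = fully synchronized)."""
    n = leeg.shape[0]
    lam = np.linalg.eigvalsh(np.corrcoef(leeg)) / n  # eigenvalues, sum to 1
    lam = lam[lam > 1e-12]                           # guard against log(0)
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(n)

# A first-order neighborhood = a sensor plus its immediate neighbors:
rng = np.random.default_rng(0)
print(s_estimator(rng.standard_normal((7, 1024))))   # near 0 for noise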

Relevance: 100.00%

Abstract:

The objectives of this study were to characterize raltegravir (RAL) population pharmacokinetics in HIV-positive (HIV(+)) and healthy individuals, identify influential factors, and search for new candidate genes involved in UDP-glucuronosyltransferase (UGT)-mediated glucuronidation. The pharmacokinetic analysis was performed with NONMEM. Genetic association analysis was performed with PLINK, using relative bioavailability as the phenotype. Simulations were performed to compare once- and twice-daily regimens. A two-compartment model with first-order absorption adequately described the data. Atazanavir, gender, and bilirubin levels influenced RAL relative bioavailability, which was 30% lower in HIV(+) than in healthy individuals. UGT1A9*3 was the only genetic variant possibly influencing RAL pharmacokinetics. The majority of RAL pharmacokinetic variability remains unexplained by genetic and nongenetic factors. Owing to the very large variability, trough drug levels might be very low under the standard dosing regimen, raising the question of the potential relevance of therapeutic drug monitoring of RAL in some situations.
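
The once- versus twice-daily comparison mentioned above can be reproduced in outline by dose superposition; the sketch below deliberately simplifies to a one-compartment first-order-absorption model (the fitted model was two-compartment), and all parameter values are invented placeholders, not the study's estimates.

import numpy as np

def conc(t, dose, tau, n_doses, ka, ke, v_f):
    """Superpose n_doses doses given every tau hours; one-compartment,
    first-order absorption; v_f is the apparent volume V/F."""
    c = np.zeros_like(t)
    for i in range(n_doses):
        dt = np.clip(t - i * tau, 0.0, None)   # zero before each dose
        c += dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * dt) - np.exp(-ka * dt))
    return c

t = np.linspace(0.0, 168.0, 2000)              # one week, hours
ka, ke, v_f = 1.0, 0.08, 200.0                 # placeholder values
qd = conc(t, 800.0, 24.0, 7, ka, ke, v_f)      # once daily
bid = conc(t, 400.0, 12.0, 14, ka, ke, v_f)    # twice daily, same daily dose
print(qd[-1], bid[-1])   # end-of-week levels: BID sustains higher troughs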

Relevance: 100.00%

Abstract:

AIMS: The aims of this observational study were to assess the variability in imatinib pharmacokinetics and to explore the relationship between its disposition and various biological covariates, especially plasma alpha1-acid glycoprotein concentrations. METHODS: A population pharmacokinetic analysis was performed using NONMEM based on 321 plasma samples from 59 patients with either chronic myeloid leukaemia or gastrointestinal stromal tumours. The influence of covariates on oral clearance and volume of distribution was examined. Furthermore, the in vivo intracellular pharmacokinetics of imatinib was explored in five patients. RESULTS: A one-compartment model with first-order absorption appropriately described the data, giving a mean (±SEM) oral clearance of 14.3 l h⁻¹ (±1.0) and a volume of distribution of 347 l (±62). Oral clearance was influenced by body weight, age, sex and disease diagnosis. A large proportion of the interindividual variability (36% of clearance and 63% of volume of distribution) remained unexplained by these demographic covariates. Plasma alpha1-acid glycoprotein concentrations had a marked influence on total imatinib concentrations. Moreover, we observed an intra/extracellular ratio of 8, suggesting substantial uptake of the drug into the target cells. CONCLUSION: Because of the high pharmacokinetic variability of imatinib and the reported relationships between its plasma concentration and efficacy and toxicity, the usefulness of therapeutic drug monitoring as an aid to optimizing therapy should be further investigated. Ideally, such an approach should take account of either circulating alpha1-acid glycoprotein concentrations or free imatinib concentrations.
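
As a point of reference, the disposition model reported above corresponds to the standard one-compartment first-order-absorption equation; the sketch below plugs in the published population values (CL 14.3 l/h, V 347 l) but assumes the absorption rate constant and dose, which are not given here.

import numpy as np

CL, V = 14.3, 347.0              # reported population estimates (l/h, l)
ke = CL / V                      # elimination rate constant, 1/h
ka = 0.6                         # assumed absorption rate constant, 1/h
dose = 400.0                     # assumed oral dose, mg (F folded into V)

t = np.linspace(0.0, 24.0, 97)
c = dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
print(f"Cmax ~ {c.max():.2f} mg/l at t ~ {t[c.argmax()]:.1f} h")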

Relevance: 100.00%

Abstract:

Background: Imatinib has revolutionized the treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GIST). Considering the large inter-individual differences in the function of the systems involved in its disposition, exposure to imatinib can be expected to vary widely among patients. This observational study aimed at describing imatinib pharmacokinetic variability and its relationship with various biological covariates, especially plasma alpha1-acid glycoprotein (AGP), and at exploring the concentration-response relationship in patients. Methods: A population pharmacokinetic model (NONMEM) including 321 plasma samples from 59 patients was built and used to derive individual post-hoc Bayesian estimates of drug exposure (AUC, area under the concentration-time curve). Associations between AUC and therapeutic response or tolerability were explored by ordered logistic regression. The influence of the target genotype (i.e. KIT mutation profile) on response was also assessed in GIST patients. Results: A one-compartment model with first-order absorption appropriately described the data, with an average oral clearance (CL) of 14.3 L/h and a volume of distribution (Vd) of 347 L. A large inter-individual variability remained unexplained, both in CL (36%) and Vd (63%), but AGP levels proved to have a marked impact on total imatinib disposition. Moreover, both total and free AUC correlated with the occurrence and number of side effects (e.g. OR 2.9±0.6 for a 2-fold free AUC increase; p<0.001). Furthermore, in GIST patients, a higher free AUC predicted a higher probability of therapeutic response (OR 1.9±0.5; p<0.05), notably in patients whose tumors harbored an exon 9 mutation or wild-type KIT, genotypes known to decrease tumor sensitivity to imatinib. Conclusion: The large pharmacokinetic variability, together with the pharmacokinetic-pharmacodynamic relationships uncovered, argues for further investigation of the usefulness of individualizing imatinib prescription based on therapeutic drug monitoring (TDM). For this type of drug, such an approach should ideally take into consideration either circulating AGP concentrations or free drug levels, as well as KIT genotype in GIST.
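
The exposure-response analysis described above (ordered logistic regression of side-effect count on exposure) can be set up roughly as follows; the data are random placeholders, and statsmodels' OrderedModel is just one possible implementation, not the software used in the study.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 59
log2_auc = rng.normal(5.0, 1.0, n)                    # placeholder free AUC (log2)
n_side_effects = pd.Categorical(rng.integers(0, 4, n), ordered=True)

model = OrderedModel(n_side_effects,
                     pd.DataFrame({"log2_auc": log2_auc}), distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params["log2_auc"]))   # OR per 2-fold AUC increase (cf. 2.9)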

Relevance: 100.00%

Abstract:

Valganciclovir (VGC) is an oral prodrug of ganciclovir (GCV) recently introduced for prophylaxis and treatment of cytomegalovirus infection. Optimal concentration exposure for effective and safe VGC therapy would require either reproducible VGC absorption and GCV disposition or dosage adjustment based on therapeutic drug monitoring (TDM). We examined GCV population pharmacokinetics in solid organ transplant recipients receiving oral VGC, including the influence of clinical factors, the magnitude of variability, and its impact on efficacy and tolerability. Nonlinear mixed effect model (NONMEM) analysis was performed on plasma samples from 65 transplant recipients under VGC prophylaxis or treatment. A two-compartment model with first-order absorption appropriately described the data. Systemic clearance was markedly influenced by the glomerular filtration rate (GFR), patient gender, and graft type (clearance/GFR = 1.7 in kidney, 0.9 in heart, and 1.2 in lung and liver recipients) with interpatient and interoccasion variabilities of 26 and 12%, respectively. Body weight and sex influenced central volume of distribution (V(1) = 0.34 liter/kg in males and 0.27 liter/kg in females [20% interpatient variability]). No significant drug interaction was detected. The good prophylactic efficacy and tolerability of VGC precluded the demonstration of any relationship with GCV concentrations. In conclusion, this analysis highlights the importance of thorough adjustment of VGC dosage to renal function and body weight. Considering the good predictability and reproducibility of the GCV profile after treatment with oral VGC, routine TDM does not appear to be clinically indicated in solid-organ transplant recipients. However, GCV plasma measurement may still be helpful in specific clinical situations.
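
The graft-type covariate relationship reported for clearance can be written down directly; the sketch below is only the fixed-effect part (the 26% interpatient and 12% interoccasion variability terms of the mixed-effects model are omitted), with the proportional form assumed from the printed ratios.

def gcv_clearance(gfr, graft):
    """Typical ganciclovir clearance (same units as GFR) from the reported
    clearance/GFR ratios by graft type."""
    ratio = {"kidney": 1.7, "heart": 0.9, "lung": 1.2, "liver": 1.2}
    return ratio[graft] * gfr

print(gcv_clearance(60.0, "kidney"))   # e.g. 102.0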

Relevance: 100.00%

Abstract:

OBJECTIVES: We developed a population model that describes the ocular penetration and pharmacokinetics of penciclovir in human aqueous humour and plasma after oral administration of famciclovir. METHODS: Fifty-three patients undergoing cataract surgery received a single oral dose of 500 mg of famciclovir prior to surgery. Concentrations of penciclovir in both plasma and aqueous humour were measured by HPLC with fluorescence detection. Concentrations in plasma and aqueous humour were fitted using a two-compartment model (NONMEM software). Inter-individual and intra-individual variabilities were quantified, and the influence of demographic, physiopathological and environmental variables on penciclovir pharmacokinetics was explored. RESULTS: Drug concentrations were fitted using a two-compartment, open model with first-order transfer rates between the plasma and aqueous humour compartments. Among the tested covariates, creatinine clearance, co-intake of angiotensin-converting enzyme inhibitors and body weight significantly influenced penciclovir pharmacokinetics. Plasma clearance was 22.8 ± 9.1 L/h and clearance from the aqueous humour was 8.2 × 10⁻⁵ L/h. AUCs were 25.4 ± 10.2 and 6.6 ± 1.8 μg · h/mL in plasma and aqueous humour, respectively, yielding a penetration ratio of 0.28 ± 0.06. Simulated concentrations in the aqueous humour after administration of 500 mg of famciclovir three times daily were in the range of values required for 50% growth inhibition of non-resistant strains of the herpes zoster virus family. CONCLUSIONS: Plasma and aqueous penciclovir concentrations showed significant variability that could only be partially explained by renal function, body weight and comedication. Concentrations in the aqueous humour were much lower than in plasma, suggesting that factors in the blood-aqueous humour barrier might prevent its ocular penetration or that redistribution occurs into other ocular compartments.
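
The plasma/aqueous-humour model described above (two compartments coupled by first-order transfer) can be written as a pair of differential equations; only the two clearances below are taken from the abstract, while the volumes, transfer rate, and absorbed dose are invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp

CL_P = 22.8      # plasma clearance, L/h (reported)
CL_A = 8.2e-5    # clearance from aqueous humour, L/h (reported)
V_P = 50.0       # plasma volume of distribution, L (assumed)
V_A = 2.5e-4     # aqueous humour volume, L (assumed)
K_PA = 1e-4      # first-order plasma-to-aqueous transfer, 1/h (assumed)

def rhs(t, y):
    a_p, a_a = y   # drug amounts (mg) in plasma and aqueous humour
    return [-(CL_P / V_P + K_PA) * a_p,
            K_PA * a_p - (CL_A / V_A) * a_a]

sol = solve_ivp(rhs, (0.0, 24.0), [350.0, 0.0])   # assumed absorbed dose, mg
print(sol.y[1][-1] / V_A)   # aqueous concentration at 24 h, mg/L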

Relevance: 100.00%

Abstract:

BACKGROUND: Maintaining therapeutic concentrations of drugs with a narrow therapeutic window is a complex task. Several computer systems have been designed to help doctors determine optimum drug dosage. Significant improvements in health care could be achieved if computer advice improved health outcomes and could be implemented in routine practice in a cost-effective fashion. This is an updated version of an earlier Cochrane systematic review, by Walton et al, published in 2001. OBJECTIVES: To assess whether computerised advice on drug dosage has beneficial effects on the process or outcome of health care. SEARCH STRATEGY: We searched the Cochrane Effective Practice and Organisation of Care Group specialised register (June 1996 to December 2006), MEDLINE (1966 to December 2006), EMBASE (1980 to December 2006), hand-searched the journal Therapeutic Drug Monitoring (1979 to March 2007) and the Journal of the American Medical Informatics Association (1996 to March 2007), as well as reference lists from primary articles. SELECTION CRITERIA: Randomised controlled trials, controlled trials, controlled before-and-after studies and interrupted time series analyses of computerised advice on drug dosage were included. The participants were health professionals responsible for patient care. The outcomes were: any objectively measured change in the behaviour of the health care provider (such as changes in the dose of drug used); any change in the health of patients resulting from computerised advice (such as adverse reactions to drugs). DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and assessed study quality. MAIN RESULTS: Twenty-six comparisons (23 articles) were included (compared with fifteen comparisons in the original review), covering a wide range of drugs in inpatient and outpatient settings. Interventions usually targeted doctors, although some studies attempted to influence prescriptions by pharmacists and nurses. Although all studies used reliable outcome measures, their quality was generally low. Computerised advice for drug dosage gave significant benefits by: (1) increasing the initial dose (standardised mean difference 1.12, 95% CI 0.33 to 1.92); (2) increasing serum concentrations (standardised mean difference 1.12, 95% CI 0.43 to 1.82); (3) reducing the time to therapeutic stabilisation (standardised mean difference -0.55, 95% CI -1.03 to -0.08); (4) reducing the risk of toxic drug levels (rate ratio 0.45, 95% CI 0.30 to 0.70); and (5) reducing the length of hospital stay (standardised mean difference -0.35, 95% CI -0.52 to -0.17). AUTHORS' CONCLUSIONS: This review suggests that computerised advice for drug dosage has some benefits: it increased the initial dose of drug, increased serum drug concentrations and led to more rapid therapeutic control. It also reduced the risk of toxic drug levels and the length of time spent in the hospital. However, it had no effect on adverse reactions. In addition, there was no evidence to suggest that some decision-support technical features (such as integration into a computer physician order entry system) or aspects of the organisation of care (such as the setting) could optimise the effect of computerised advice.

Relevance: 100.00%

Abstract:

The large spatial inhomogeneity in the transmit B(1) field (B(1)(+)) observable in human MR images at high static magnetic fields (B(0)) severely impairs image quality. To overcome this effect in brain T(1)-weighted images, the MPRAGE sequence was modified to generate two different images at two different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T(1)-weighted images in which the resulting image was free of proton density contrast, T(2) contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit time between brain tissues and to minimize the effect of B(1)(+) variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T(1)-weighted images, acquired within 12 min, high-resolution 3D T(1) maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T(1) maps were validated in phantom experiments. In humans, the T(1) values obtained at 7 T were 1.15±0.06 s for white matter (WM) and 1.92±0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T(1) values obtained (0.81±0.03 s for WM and 1.35±0.05 s for GM) were once again in very good agreement with values in the literature.
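
The image combination at the core of MP2RAGE can be sketched in a few lines: the normalized product of the two complex inversion-time images cancels the proton-density, T(2), and reception-bias terms and, to first order, the transmit-field dependence, leaving a T(1)-weighted quantity bounded in [-0.5, 0.5]. The arrays below are random placeholders for the two GRE readouts.

import numpy as np

def mp2rage_combine(s1, s2):
    """Uniform T1-weighted image from the two complex GRE images acquired
    at the two inversion times."""
    return np.real(np.conj(s1) * s2) / (np.abs(s1) ** 2 + np.abs(s2) ** 2)

rng = np.random.default_rng(0)
shape = (4, 4)
s_ti1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
s_ti2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
print(mp2rage_combine(s_ti1, s_ti2))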

Relevance: 100.00%

Abstract:

To study the different temporal components of cancer mortality (age, period and cohort), methods of graphic representation were applied to Swiss mortality data from 1950 to 1984. Maps using continuous slopes ("contour maps"), based on eight tones of grey according to the absolute distribution of rates, were used to represent the surfaces defined by the matrix of the various age-specific rates. Further, progressively more complex regression surface equations were defined on the basis of two independent variables (age/cohort) and a dependent one (each age-specific mortality rate). General patterns of trends in cancer mortality were thus identified, permitting definition of important cohort effects (e.g., upwards for lung and other tobacco-related neoplasms, or downwards for stomach) or period effects (e.g., downwards for intestinal or thyroid cancers), besides the major underlying age component. For most cancer sites, even the lower-order (first- to third-degree) models utilised provided excellent fits, allowing immediate identification of residuals (e.g., high or low mortality points) as well as estimates of first-order interactions between the three factors, although the parameters of the main effects remained undetermined. Thus, the method should essentially be used as a summary guide to illustrate and understand the general patterns of age, period and cohort effects in (cancer) mortality, although it cannot conceptually solve the inherent problem of identifiability of the three components.
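
The regression-surface idea can be illustrated with a low-degree polynomial in age and cohort fitted to the matrix of log rates, whose residuals flag high or low mortality points and whose cross term captures a first-order interaction; the data here are synthetic.

import numpy as np

rng = np.random.default_rng(0)
age = np.repeat(np.arange(30, 80, 5), 7).astype(float)        # age classes
cohort = np.tile(np.arange(1890, 1925, 5), 10).astype(float)  # birth cohorts
rate = 1e-3 * np.exp(0.08 * (age - 30)) * (1 + 0.01 * (cohort - 1890))
rate *= rng.lognormal(0.0, 0.05, rate.shape)                  # noise

a, c = age - age.mean(), cohort - cohort.mean()               # centered
X = np.column_stack([np.ones_like(a), a, c, a**2, c**2, a * c])  # 2nd degree
coef, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
resid = np.log(rate) - X @ coef                               # outlier points
print(coef.round(4), float(np.abs(resid).max()))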

Relevance: 100.00%

Abstract:

BACKGROUND: Bone graft substitutes such as calcium sulfate are frequently used as carrier materials for local antimicrobial therapy in orthopedic surgery. This study aimed to assess the systemic absorption and disposition of tobramycin in patients treated with a tobramycin-laden bone graft substitute (Osteoset® T). METHODS: Nine blood samples were taken from each of 12 patients over 10 days after surgical implantation of Osteoset® T. Tobramycin concentration was measured by fluorescence polarization. Population pharmacokinetic analysis was performed using NONMEM to assess the average values and variability (CV) of the pharmacokinetic parameters. Bioavailability (F) was assessed by equating clearance (CL) with creatinine clearance (Cockcroft CLCr). Based on the final model, simulations with various doses and renal function levels were performed (ClinicalTrials.gov number NCT01938417). RESULTS: The patients were 52 ± 20 years old, their mean body weight was 73 ± 17 kg and their mean CLCr was 119 ± 55 mL/min. Either 10 g or 20 g of Osteoset® T with 4% tobramycin sulfate was implanted in various sites. Concentration profiles remained low and were consistent with absorption-rate-limited first-order release, while showing substantial variability. With CL equated to CLCr, the mean absorption rate constant (ka) was 0.06 h⁻¹, F was 63% or 32% (CV 74%) for 10 and 20 g Osteoset® T, respectively, and the volume of distribution (V) was 16.6 L (CV 89%). Simulations predicted sustained high, potentially toxic concentrations with 10 g, 30 g and 50 g Osteoset® T for CLCr values below 10, 20 and 30 mL/min, respectively. CONCLUSIONS: Osteoset® T does not raise toxicity concerns in subjects without significant renal failure. The risk/benefit ratio might turn unfavorable in cases of severe renal failure, even after standard-dose implantation.
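
With the reported estimates (ka 0.06 h⁻¹, V 16.6 L, CL equated to CLCr, F 63% for 10 g), the absorption-rate-limited profile can be simulated as follows; the conversion of implant mass to an absorbable tobramycin dose is an assumption for illustration.

import numpy as np

KA, V = 0.06, 16.6                      # 1/h and L (reported)
DOSE = 0.04 * 10_000 * 0.63             # 10 g implant, 4% tobramycin, F = 63%

def profile(clcr_ml_min, t):
    ke = clcr_ml_min * 60.0 / 1000.0 / V        # CL equated to CLCr, as 1/h
    return DOSE * KA / (V * (KA - ke)) * (np.exp(-ke * t) - np.exp(-KA * t))

t = np.linspace(0.0, 240.0, 481)                # ten days, hours
for clcr in (120.0, 30.0, 10.0):                # normal to severely impaired
    c = profile(clcr, t)
    print(f"CLCr {clcr:>5} ml/min: Cmax {c.max():.1f} mg/L at {t[c.argmax()]:.0f} h")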

Relevance: 100.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of the measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
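
The dominance checks described above can be coded compactly: a two-sample Kolmogorov-Smirnov test, a pointwise ECDF comparison for first-order stochastic dominance, and the absolute Lorenz curve (quantile-indexed expected shortfall times the quantile) for second-order dominance. The return series below are simulated placeholders.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
r_agg = rng.normal(0.006, 0.04, 500)      # aggregated-measure portfolio
r_one = rng.normal(0.004, 0.05, 500)      # single-measure portfolio

print(ks_2samp(r_agg, r_one))             # are the distributions different?

grid = np.linspace(min(r_agg.min(), r_one.min()),
                   max(r_agg.max(), r_one.max()), 200)

def ecdf(x, g):
    return (x[:, None] <= g).mean(axis=0)

fosd = bool(np.all(ecdf(r_agg, grid) <= ecdf(r_one, grid)))   # first-order SD

def abs_lorenz(x, qs):
    s = np.sort(x)
    return np.array([s[: max(1, int(q * len(s)))].mean() * q for q in qs])

qs = np.linspace(0.01, 0.99, 99)
sosd = bool(np.all(abs_lorenz(r_agg, qs) >= abs_lorenz(r_one, qs)))  # 2nd-order SD
print(fosd, sosd)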

Relevance: 100.00%

Abstract:

General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one argument put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but about how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank, and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high share of North-North trade, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition).
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. The latter increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings of this chapter go together with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new order-of-magnitude estimates of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
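
The center-of-gravity calculation of the last chapter reduces to a weighted mean on the sphere: map each city to a 3-D unit vector, average with economic weights, and project the mean vector back to latitude/longitude. Cities and weights below are placeholders.

import numpy as np

def center_of_gravity(lat_deg, lon_deg, weights):
    """Weighted physical center of mass of points on the unit sphere,
    returned as the (lat, lon) of its projection onto the surface."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
    m = np.average(xyz, axis=0, weights=weights)
    lat_c = np.degrees(np.arctan2(m[2], np.hypot(m[0], m[1])))
    lon_c = np.degrees(np.arctan2(m[1], m[0]))
    return lat_c, lon_c

# Three placeholder "cities" (lat, lon) with GDP-like weights:
print(center_of_gravity([48.9, 35.7, 40.7], [2.3, 139.7, -74.0], [1.0, 1.5, 1.8]))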

Relevance: 100.00%

Abstract:

BACKGROUND: Aminoglycosides are mandatory in the treatment of severe infections in burns. However, their pharmacokinetics are difficult to predict in critically ill patients. Our objective was to describe the pharmacokinetic parameters of high doses of tobramycin administered at extended intervals in severely burned patients. METHODS: We prospectively enrolled 23 burned patients receiving tobramycin in combination therapy for Pseudomonas species infections in a burn ICU over 2 years in a therapeutic drug monitoring program. Trough and post-peak tobramycin levels were measured to adjust drug dosage. Pharmacokinetic parameters were derived from two-point first-order kinetics. RESULTS: The tobramycin peak concentration was 7.4 (3.1-19.6) microg/ml and the Cmax/MIC ratio 14.8 (2.8-39.2). Half-life was 6.9 (range 1.8-24.6) h with a distribution volume of 0.4 (0.2-1.0) l/kg. Clearance was 35 (14-121) ml/min and was weakly but significantly correlated with creatinine clearance. CONCLUSION: Tobramycin had a normal clearance but an increased volume of distribution and a prolonged half-life in burned patients. Moreover, its pharmacokinetic parameters are highly variable in this population. These data support extended-interval administration and strongly suggest that aminoglycosides should only be used within a structured pharmacokinetic monitoring program.
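
The two-point first-order calculation named in the methods is the standard aminoglycoside monitoring arithmetic: the elimination rate constant from two log-linear levels, the half-life from it, and the volume and clearance from the back-extrapolated peak. The numbers below are illustrative, not patient data.

import numpy as np

def two_point_pk(c_peak, t_peak, c_trough, t_trough, dose_mg):
    """First-order parameters from a post-peak and a trough level;
    simplified (ignores infusion duration and accumulation)."""
    ke = np.log(c_peak / c_trough) / (t_trough - t_peak)   # 1/h
    t_half = np.log(2.0) / ke                              # h
    c0 = c_peak * np.exp(ke * t_peak)                      # mg/l, extrapolated
    v = dose_mg / c0                                       # l
    cl = ke * v                                            # l/h
    return ke, t_half, v, cl

print(two_point_pk(7.4, 1.0, 0.9, 23.0, 450.0))   # t1/2 ~ 7.2 h here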

Relevance: 100.00%

Abstract:

Current measures of ability emotional intelligence (EI)--including the well-known Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT)--suffer from several limitations, including low discriminant validity and questionable construct and incremental validity. We show that the MSCEIT is largely predicted by personality dimensions, general intelligence, and demographics, with multiple R's for the MSCEIT branches of up to .66; for the general EI factor this relation was even stronger (multiple R = .76). Concerning the factor structure of the MSCEIT, we found support for four first-order factors, which had differential relations with personality, but no support for a higher-order global EI factor. We discuss implications for employing the MSCEIT, including (a) using the single branch scores rather than the total score, (b) always controlling for personality and general intelligence to ensure unbiased parameter estimates for the EI factors, and (c) correcting for measurement error. Failure to account for these methodological aspects may severely compromise predictive validity testing. We also discuss avenues for the improvement of ability-based tests.
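
Recommendation (b) amounts to an incremental-validity check: regress a criterion on personality and intelligence first, then see how much an MSCEIT score adds. A sketch with simulated placeholder data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
personality = rng.normal(size=(n, 5))            # Big Five placeholders
g = rng.normal(size=n)                           # general intelligence
msceit = 0.5 * g + personality.sum(axis=1) * 0.2 + rng.normal(size=n)
criterion = 0.4 * g + 0.1 * msceit + rng.normal(size=n)

X0 = sm.add_constant(np.column_stack([personality, g]))
X1 = sm.add_constant(np.column_stack([personality, g, msceit]))
r2_0 = sm.OLS(criterion, X0).fit().rsquared
r2_1 = sm.OLS(criterion, X1).fit().rsquared
print(f"Incremental R2 of the MSCEIT beyond controls: {r2_1 - r2_0:.3f}")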

Relevance: 100.00%

Abstract:

We have explored the possibility of obtaining first-order permeability estimates for saturated alluvial sediments based on the poro-elastic interpretation of the P-wave velocity dispersion inferred from sonic logs. Modern sonic logging tools designed for environmental and engineering applications allow P-wave velocity measurements at multiple emitter frequencies over a bandwidth covering 5 to 10 octaves. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt-to-sand range and typical emitter frequencies ranging from approximately 1 to 30 kHz, the observable velocity dispersion should be sufficiently pronounced to allow reliable first-order estimation of the permeability structure. The corresponding predictions have been tested on, and verified for, a borehole penetrating a typical surficial alluvial aquifer. In addition to multifrequency sonic logs, a comprehensive suite of nuclear and electrical logs, an S-wave log, a litholog, and a limited number of laboratory measurements of permeability on retrieved core material were also available. This complementary information proved essential for parameterizing the poro-elastic inversion procedure and for assessing the uncertainty and internal consistency of the corresponding permeability estimates. Our results indicate that the permeability estimates thus obtained are largely consistent with those expected from the corresponding granulometric characteristics, as well as with the available evidence from laboratory measurements. These findings are also consistent with evidence from ocean acoustics, which indicates that, over a frequency range spanning several orders of magnitude, the classical theory of poro-elasticity is generally capable of explaining the observed P-wave velocity dispersion in medium- to fine-grained seabed sediments.
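
One simple poro-elastic link between the observable dispersion and permeability is Biot's characteristic frequency, f_c = phi * eta / (2 * pi * rho_f * k): the P-wave dispersion is centred near f_c, so locating it within the 1-30 kHz logging band constrains the permeability k. The inversion below is a sketch under that single-frequency assumption, with water properties as defaults.

import numpy as np

def permeability_from_fc(fc_hz, porosity, eta=1.0e-3, rho_f=1000.0):
    """Invert Biot's characteristic frequency for permeability (m^2);
    eta = pore-fluid viscosity (Pa s), rho_f = fluid density (kg/m^3)."""
    return porosity * eta / (2.0 * np.pi * rho_f * fc_hz)

# If the dispersion inflection sits near 10 kHz in a 30%-porosity sand:
k = permeability_from_fc(10e3, 0.30)
print(f"k ~ {k:.2e} m^2 (~ {k / 9.87e-13:.2f} darcy)")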