34 results for semi-empirical methods


Abstract:

BACKGROUND: In low-mortality countries, life expectancy is increasing steadily. This increase can be disentangled into two separate components: the delayed incidence of death (i.e. the rectangularization of the survival curve) and the shift of maximal age at death to the right (i.e. the extension of longevity). METHODS: We studied the secular increase of life expectancy at age 50 in nine European countries between 1922 and 2006. The respective contributions of rectangularization and longevity to increasing life expectancy are quantified with a specific tool. RESULTS: For men, an acceleration of rectangularization was observed in the 1980s in all nine countries, whereas a deceleration occurred among women in six countries in the 1960s. These diverging trends are likely to reflect the gender-specific trends in smoking. As for longevity, the extension was steady from 1922 in both genders in almost all countries. The gain of years due to longevity extension exceeded the gain due to rectangularization. This predominance over rectangularization was still observed during the most recent decades. CONCLUSIONS: Disentangling life expectancy into components offers new insights into the underlying mechanisms and possible determinants. Rectangularization mainly reflects the secular changes of the known determinants of early mortality, including smoking. Explaining the increase of maximal age at death is a more complex challenge. It might be related to slow and lifelong changes in the socio-economic environment and lifestyles, as well as in population composition. The continued increase in longevity does not suggest that we are approaching any upper limit of human longevity.
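
To make the decomposed quantity concrete, here is a minimal life-table sketch in Python: it computes life expectancy at age 50 (e50), the measure whose secular increase the study splits into rectangularization and longevity extension. The Gompertz-style death rates are purely illustrative, not fitted to the nine countries' data.

```python
# A minimal life-table sketch (illustrative Gompertz rates, not the study's
# data or decomposition tool): life expectancy at age 50 from death rates.
import numpy as np

ages = np.arange(50, 111)                 # ages 50..110
mx = 0.005 * np.exp(0.085 * (ages - 50))  # hypothetical central death rates
qx = mx / (1 + 0.5 * mx)                  # probability of dying within the year
qx[-1] = 1.0                              # close the table at the top age
lx = np.concatenate(([1.0], np.cumprod(1 - qx)[:-1]))  # survival curve l(x)
Lx = lx * (1 - 0.5 * qx)                  # person-years lived in each interval
print(f"e50 = {Lx.sum():.1f} years")      # life expectancy at age 50
```

In these terms, rectangularization corresponds to qx falling at younger ages while the upper tail of lx stays put, whereas longevity extension corresponds to the tail itself shifting to older ages.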

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.

The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level: enormous archives of satellite images are now available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of images of very high spatial and/or spectral resolution is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent, is studied through the different models presented. The emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to increase their implementation potential for users and to avoid overly complex models. The major challenge of the Thesis is to remain close to the concrete problems of satellite image users without losing the methodological interest of the proposed methods from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed with this synergy in mind. Four models are proposed. The first addresses the problems of high dimensionality and redundancy of the data with a model that optimizes classification performance by adapting to the particularities of the image: a ranking of the variables (the spectral bands) is optimized jointly with the base model, so that only the variables important for solving the problem are used by the classifier, and the model automatically provides both an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information are the common root of the second and third models, based respectively on active learning and semi-supervised learning: the former improves the quality of the training set through direct interaction between the user and the machine, while the latter uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges and opportunities for remote sensing image processing.
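
As an illustration of the active-learning model described above, here is a minimal Python sketch (hypothetical data, kernel, and query budget; not the thesis experiments): an RBF-kernel SVM repeatedly queries the pixels closest to its decision boundary, with a simulated oracle standing in for the user.

```python
# A minimal sketch of margin-based active learning for pixel classification.
# The "pixels", labels, kernel, and budget are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(2000, 10))                     # candidate pixels, 10 bands
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # oracle standing in for the user

labeled = list(rng.choice(len(X_pool), size=20, replace=False))  # small initial set
for _ in range(10):                                      # 10 interaction rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
    margin = np.abs(clf.decision_function(X_pool))       # distance to the boundary
    margin[labeled] = np.inf                             # never re-query a labeled pixel
    labeled.extend(np.argsort(margin)[:5])               # query the 5 most uncertain

clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
print("labels used:", len(labeled), "| pool accuracy:", clf.score(X_pool, y_pool))
```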

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is what jump process to use to model the returns of the S&P500. In the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process (a constant or some function of the state variables) and choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
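
To fix the idea behind characteristic-function matching, here is a minimal Python sketch on a deliberately simple case: i.i.d. normal returns, whose characteristic function is known in closed form. The grid and weighting scheme are illustrative assumptions; the thesis works with the much harder joint unconditional characteristic function of SV jump-diffusion models.

```python
# A minimal sketch of CF matching on a toy case (i.i.d. normal returns),
# not the thesis's joint unconditional CF for SV jump-diffusions: estimate
# (mu, sigma) by minimizing a weighted squared distance between the
# empirical and the model characteristic functions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(0.05, 0.2, size=5000)             # simulated "returns"
t = np.linspace(-10.0, 10.0, 201)                # grid of CF arguments
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical characteristic function

def distance(theta):
    mu, sigma = theta
    model_cf = np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)  # CF of N(mu, sigma^2)
    weight = np.exp(-t ** 2)                     # damp the tails of the integrand
    return np.sum(weight * np.abs(ecf - model_cf) ** 2)

fit = minimize(distance, x0=[0.0, 0.1], method="Nelder-Mead")
print("estimated (mu, sigma):", np.round(fit.x, 3))  # close to the true (0.05, 0.2)
```

The same matching principle carries over to the estimators discussed above: replace the normal CF with the model's joint unconditional CF and build the empirical CF from pairs (or triples) of consecutive observations for the bi- and three-dimensional versions.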

Abstract:

"Most quantitative empirical analyses are motivated by the desire to estimate the causal effect of an independent variable on a dependent variable. Although the randomized experiment is the most powerful design for this task, in most social science research done outside of psychology, experimental designs are infeasible. (Winship & Morgan, 1999, p. 659)." This quote from earlier work by Winship and Morgan, which was instrumental in setting the groundwork for their book, captures the essence of our review of Morgan and Winship's book: It is about causality in nonexperimental settings.

Abstract:

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning while yielding error rates competitive with those methods and with existing shallow semi-supervised techniques.
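
A minimal PyTorch sketch of the idea, under assumed toy dimensions: a supervised loss on labeled data is combined with an embedding loss on a hidden layer that pulls neighboring unlabeled pairs together and pushes non-neighbors apart by a margin. The architecture, neighbor graph, and loss weighting are illustrative, not the paper's exact setup.

```python
# A minimal sketch (toy dimensions; lambda, margin, and the neighbor pairs
# are assumptions): a semi-supervised embedding loss applied to the last
# hidden layer of a small multilayer network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))  # embedding layers
head = nn.Linear(32, 2)                                               # classifier head
opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()), lr=1e-3)

xl, yl = torch.randn(64, 20), torch.randint(0, 2, (64,))   # labeled batch
xi, xj = torch.randn(128, 20), torch.randn(128, 20)        # unlabeled pairs
is_neighbor = torch.randint(0, 2, (128,)).float()          # 1 = neighbors (e.g. kNN graph)

for step in range(100):
    opt.zero_grad()
    sup = nn.functional.cross_entropy(head(net(xl)), yl)   # supervised term
    d = (net(xi) - net(xj)).pow(2).sum(dim=1)              # squared embedding distances
    push = torch.clamp(1.0 - d.sqrt(), min=0.0).pow(2)     # hinge for non-neighbors
    emb = (is_neighbor * d + (1 - is_neighbor) * push).mean()
    (sup + 0.1 * emb).backward()                           # lambda = 0.1 (assumed)
    opt.step()
print("final loss terms:", sup.item(), emb.item())
```

Applying the same embedding term to every layer, rather than only the last hidden one, corresponds to the per-layer variant mentioned above.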

Abstract:

Five years after the 2005 Pakistan earthquake that triggered multiple mass movements, landslides continue to pose a threat to the population of Azad Kashmir, especially during heavy monsoon rains. The thousands of landslides triggered by the magnitude 7.6 earthquake in 2005 were not simply a natural phenomenon but were largely induced by human activities, namely road building, grazing, and deforestation. The damage caused by the landslides in the study area (381 km²) is estimated at 3.6 times the 2005 annual public works budget of Azad Kashmir (US$ 1 million). In addition to human suffering, this cost constitutes a significant economic setback for the region, one that could have been reduced through improved land use and risk management. This article describes interdisciplinary research conducted 18 months after the earthquake to provide a more systemic approach to understanding the risks posed by landslides, including their physical, environmental, and human contexts. The goal of this research is twofold: first, to present empirical data on the social, geological, and environmental contexts in which widespread landslides occurred following the 2005 earthquake; and second, to describe straightforward methods that can be used for integrated landslide risk assessments in data-poor environments. The article analyzes the limitations of the methodologies and the challenges of conducting interdisciplinary research that integrates both social and physical data. This research concludes that reducing landslide risk is ultimately a management issue, rooted in land use decisions and governance.

Abstract:

Background: This trial was conducted to evaluate the safety and immunogenicity of two virosome-formulated malaria peptidomimetics, derived from Plasmodium falciparum AMA-1 and CSP, in malaria semi-immune adults and children. Methods: The design was a prospective, randomized, double-blind, controlled, age-de-escalating study with two immunizations. Ten adults and 40 children (aged 5-9 years) living in a malaria-endemic area were immunized with PEV3B or the virosomal influenza vaccine Inflexal® V on days 0 and 90. Results: No serious or severe adverse events (AEs) related to the vaccines were observed. The only local solicited AE reported was pain at the injection site, which affected more children in the Inflexal® V group than in the PEV3B group (p = 0.014). In the PEV3B group, IgG ELISA endpoint titers specific for the AMA-1 and CSP peptide antigens were significantly higher at most time points compared to the Inflexal® V control group. Across all time points after the first immunization, the average ratio of endpoint titers to baseline values in PEV3B subjects ranged from 4 to 15 in adults and from 4 to 66 in children. As an exploratory outcome, we found that the incidence rate of clinical malaria episodes among vaccinated children was half that of the control children between study days 30 and 365 (0.0035 episodes per day at risk for PEV3B vs. 0.0069 for Inflexal® V; RR = 0.50 [95% CI: 0.29-0.88], p = 0.02). Conclusion: These findings provide a strong basis for the further development of multivalent virosomal malaria peptide vaccines.
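
A quick arithmetic check of the exploratory outcome as reported:

```python
# Recomputing the incidence rate ratio of clinical malaria episodes
# (study days 30-365) from the reported per-day rates.
rate_pev3b = 0.0035      # episodes per day at risk, PEV3B group
rate_inflexal = 0.0069   # episodes per day at risk, Inflexal V group
print(f"RR = {rate_pev3b / rate_inflexal:.2f}")  # 0.51, matching the reported 0.50 up to rounding
```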

Abstract:

Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools to process large amounts of available data and produce predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered particularly in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topography of mountainous regions, meteorological processes are highly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms, and artificial neural networks.
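
A minimal sketch of the kind of data-driven topo-climatic model described, with synthetic stations standing in for the Swiss network (the coordinates, DEM elevations, and lapse-rate-plus-trend temperature field are all assumed): support vector regression learns temperature as a function of location and elevation.

```python
# A minimal sketch with synthetic data (stations, DEM elevations, and a
# lapse-rate temperature field are assumptions): SVR learns temperature
# from location and elevation, then predicts along a vertical profile.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(300, 2))              # station coordinates (km)
elev = rng.uniform(300, 3000, size=300)              # DEM elevation (m)
temp = 15 - 0.0065 * elev + 0.02 * xy[:, 0] + rng.normal(0, 0.5, 300)

X = np.column_stack([xy, elev])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, temp)

profile = np.column_stack([np.full(5, 50.0), np.full(5, 50.0),
                           np.linspace(500, 2500, 5)])   # one site, rising altitude
print(np.round(model.predict(profile), 1))               # temperature falls with altitude
```

Capturing inversions or Föhn situations would require richer DEM-derived features than this sketch uses, which is where the data-driven feature selection mentioned above comes in.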

Abstract:

In my thesis I present the findings of a multiple-case study on the CSR approach of three multinational companies, applying Basu and Palazzo's (2008) CSR-character as a process model of sensemaking, Suchman's (1995) framework on legitimation strategies, and Habermas's (1996) concept of deliberative democracy. The theoretical framework is based on the assumption of a postnational constellation (Habermas, 2001), which sends multinational companies on a process of sensemaking (Weick, 1995) with regard to their responsibilities in a globalizing world. The major reason is that mainstream CSR concepts are based on the assumption of a liberal market economy embedded in a nation state, which does not fit the changing conditions for the legitimation of corporate behavior in a globalizing world. For the purpose of this study, I primarily looked at two research questions: (i) How can the CSR approach of a multinational corporation be systematized empirically? (ii) What is the impact of the changing conditions of the postnational constellation on the CSR approach of the studied multinational corporations? For the analysis, I adopted a holistic approach (Patton, 1980), combining elements of deductive and inductive theory-building methodologies (Eisenhardt, 1989b; Eisenhardt & Graebner, 2007; Glaser & Strauss, 1967; Van de Ven, 1992) with rigorous qualitative data analysis. Primary data were collected through 90 semi-structured interviews, conducted in two rounds, with executives and managers in three multinational companies and their respective stakeholders. Raw data originating from interview tapes, field notes, and contact sheets were processed, stored, and managed using the software program QSR NVivo 7. In the analysis, I applied qualitative methods to strengthen the interpretative part as well as quantitative methods to identify dominant dimensions and patterns. I found three different coping behaviors that provide insights into the corporate mindset. The results suggest that multinational corporations increasingly turn towards relational approaches to CSR to achieve moral legitimacy in formalized dialogical exchanges with their stakeholders, since legitimacy can no longer be derived from a national framework alone. I also looked at the degree to which they have reacted to the postnational constellation by assuming former state duties, and at the underlying reasoning. The findings indicate that CSR approaches become increasingly comprehensive through the integration of political strategies that reflect the growing (self-)perception of multinational companies as political actors. Based on the results, I developed a model that relates the different dimensions of corporate responsibility to the discussion on deliberative democracy, global governance, and social innovation, to provide guidance for multinational companies in a postnational world. With my thesis, I contribute to management research by (i) delivering a comprehensive critique of the mainstream CSR literature and (ii) filling the gap in thorough qualitative research on CSR in a globalizing world, using the CSR-character as an empirical device; and I contribute to organizational studies by (iii) further advancing the deliberative view of the firm proposed by Scherer and Palazzo (2008).

Abstract:

BACKGROUND: Intravenously administered antimicrobial agents have been the standard choice for the empirical management of fever in patients with cancer and granulocytopenia. If orally administered empirical therapy were as effective as intravenous therapy, it would offer advantages such as improved quality of life and lower cost. METHODS: In a prospective, open-label, multicenter trial, we randomly assigned febrile patients with cancer who had granulocytopenia that was expected to resolve within 10 days to receive empirical therapy with either oral ciprofloxacin (750 mg twice daily) plus amoxicillin-clavulanate (625 mg three times daily) or standard daily doses of intravenous ceftriaxone plus amikacin. All patients were hospitalized until their fever resolved. The primary objective of the study was to determine whether there was equivalence between the regimens, defined as an absolute difference in the rates of success of 10 percent or less. RESULTS: Equivalence was demonstrated at the second interim analysis, and the trial was terminated after the enrollment of 353 patients. In the analysis of the 312 patients who were treated according to the protocol and who could be evaluated, treatment was successful in 86 percent of the patients in the oral-therapy group (95 percent confidence interval, 80 to 91 percent) and 84 percent of those in the intravenous-therapy group (95 percent confidence interval, 78 to 90 percent; P=0.02). The results were similar in the intention-to-treat analysis (80 percent and 77 percent, respectively; P=0.03), as were the duration of fever, the time to a change in the regimen, the reasons for such a change, the duration of therapy, and survival. The types of adverse events differed slightly between the groups but were similar in frequency. CONCLUSIONS: In low-risk patients with cancer who have fever and granulocytopenia, oral therapy with ciprofloxacin plus amoxicillin-clavulanate is as effective as intravenous therapy.
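
A sketch of the equivalence logic as described: equivalence holds when the confidence interval for the absolute difference in success rates lies within the 10-percentage-point margin. The group sizes below are an assumed even split of the 312 evaluable patients, for illustration only.

```python
# Illustrative check of the equivalence criterion. Counts are reconstructed
# from the reported per-protocol percentages under an ASSUMED 156/156 split;
# they are not the trial's actual group sizes.
import math

n_oral, n_iv = 156, 156           # assumed split of the 312 evaluable patients
p_oral, p_iv = 0.86, 0.84         # reported success rates
diff = p_oral - p_iv
se = math.sqrt(p_oral * (1 - p_oral) / n_oral + p_iv * (1 - p_iv) / n_iv)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"diff = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # inside (-0.10, 0.10)?
```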

Abstract:

Spatial data analysis, mapping, and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; the methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent: they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for the mapping of medium porosity (Kanevski et al., 1999) and for the mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modelling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data and data with outliers.
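
A minimal sketch of the robustness property examined in Section 4, on synthetic data (a smooth 1D field standing in for the Cs137 measurements): the epsilon-insensitive loss bounds the influence of gross outliers on the SVR fit.

```python
# A minimal sketch on synthetic data (a smooth 1D field plus injected
# outliers stands in for the Cs137 measurements): the epsilon-insensitive
# loss keeps the SVR fit close to the underlying field despite outliers.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))[:, None]    # 1D sampling locations
y = np.sin(x).ravel() + rng.normal(0, 0.1, 200)  # smooth field + measurement noise
y[rng.choice(200, 10, replace=False)] += 5.0     # inject 10 gross outliers

svr = SVR(kernel="rbf", C=1.0, epsilon=0.2).fit(x, y)
x_test = np.linspace(0, 10, 5)[:, None]
print(np.round(svr.predict(x_test), 2))          # stays close to the true field
print(np.round(np.sin(x_test).ravel(), 2))       # true field values
```

Because the loss grows only linearly beyond the epsilon tube (with slope capped by C), each outlier's pull on the fit is bounded, unlike in a squared-error interpolator.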

Abstract:

Objectives: To review the epidemiology of native septic arthritis in order to establish local guidelines for empirical antibiotic therapy as part of an antibiotic stewardship programme. Methods: We conducted a 10-year retrospective study based on positive synovial fluid cultures and a discharge diagnosis of septic arthritis in adult patients. Microbiology results and medical records were reviewed. Results: Between 1999 and 2008, we identified 233 episodes of septic arthritis. The predominant causative pathogens were methicillin-susceptible Staphylococcus aureus (MSSA) and streptococci (respectively, 44.6% and 14.2% of cases). Only 11 cases (4.7%) of methicillin-resistant S. aureus (MRSA) arthritis were diagnosed, among which 5 (45.5%) occurred in known carriers. For large-joint infections, amoxicillin/clavulanate or cefuroxime would have been appropriate in 84.5% of cases. MRSA and Mycobacterium tuberculosis would have been the most frequent pathogens not covered. In contrast, amoxicillin/clavulanate would have been appropriate for only 75.3% of small-joint infections (82.6% if diabetics are excluded). MRSA and Pseudomonas aeruginosa would have been the main pathogens not covered. Piperacillin/tazobactam would have been appropriate in 93.8% of cases (P < 0.01 versus amoxicillin/clavulanate). This statistically significant advantage is lost after exclusion of diabetics (P = 0.19). Conclusions: Amoxicillin/clavulanate or cefuroxime would be adequate for empirical coverage of large-joint septic arthritis in our area. A broad-spectrum antibiotic would be significantly superior for small-joint infections in diabetics. Systematic coverage of MRSA is not justified, but should be considered for known carriers. These recommendations are applicable to our local setting. They might also apply to hospitals sharing the same epidemiology.

Abstract:

PURPOSE: We examined the role of smoking in the two dimensions behind the time trends in adult mortality in European countries, that is, rectangularization of the survival curve (mortality compression) and longevity extension (increase in the age at death). METHODS: Using data on national sex-specific populations aged 50 years and older from Denmark, Finland, France, West Germany, Italy, the Netherlands, Norway, Sweden, Switzerland, and the United Kingdom, we studied trends in life expectancy, rectangularity, and longevity from 1950 to 2009 for both all-cause and nonsmoking-related mortality and correlated them with trends in lifetime smoking prevalence. RESULTS: For all-cause mortality, rectangularization accelerated around 1980 among men in all the countries studied, and more recently among women in Denmark and the United Kingdom. Trends in lifetime smoking prevalence correlated negatively with both rectangularization and longevity extension, but more negatively with rectangularization. For nonsmoking-related mortality, rectangularization among men did not accelerate around 1980. Among women, the differences between all-cause mortality and nonsmoking-related mortality were small, but larger for rectangularization than for longevity extension. Rectangularization contributed less to the increase in life expectancy than longevity extension, especially for nonsmoking-related mortality among men. CONCLUSIONS: Smoking affects rectangularization more than longevity extension, among both men and women.

Abstract:

Changes in human lives are studied in psychology, sociology, and adjacent fields as outcomes of developmental processes, institutional regulations and policies, culturally and normatively structured life courses, or empirical accounts. However, such studies have used a wide range of complementary, but often divergent, concepts. This review has two aims. First, we report on the structure that has emerged from scientific life course research by focusing on abstracts from longitudinal and life course studies beginning with the year 2000. Second, we provide a sense of the disciplinary diversity of the field and assess the value of the concept of 'vulnerability' as a heuristic tool for studying human lives. Applying correspondence analysis to 10,632 scientific abstracts, we find a disciplinary divide between psychology and sociology, and observe indications of both similarities of, and differences between, studies, driven at least partly by the data and methods employed. We also find that vulnerability takes a central position in this scientific field, which leads us to suggest several reasons to see value in pursuing theory development for longitudinal and life course studies in this direction.
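
A minimal sketch of correspondence analysis, the method applied to the 10,632 abstracts, on a tiny hypothetical discipline-by-term contingency table (the row and column labels are made up, not the study's data): SVD of the standardized residuals yields the axes and coordinates that can reveal a disciplinary divide.

```python
# A minimal sketch of correspondence analysis on a tiny HYPOTHETICAL
# discipline-by-term contingency table, not the review's corpus.
import numpy as np

N = np.array([[30.0, 5.0, 2.0],    # e.g. psychology abstracts
              [4.0, 25.0, 10.0],   # e.g. sociology abstracts
              [10.0, 12.0, 20.0]]) # e.g. interdisciplinary abstracts
P = N / N.sum()                                     # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                 # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
rows = U * sv / np.sqrt(r)[:, None]                 # principal row coordinates
print("share of inertia per axis:", np.round(sv**2 / (sv**2).sum(), 3))
print("row coordinates:\n", np.round(rows[:, :2], 3))
```

Rows (disciplines) that land far apart on the leading axes use systematically different vocabulary, which is the kind of divide the review reports between psychology and sociology.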

Abstract:

PURPOSE: Adequate empirical antibiotic dose selection for critically ill burn patients is difficult due to extreme variability in drug pharmacokinetics. Therapeutic drug monitoring (TDM) may aid antibiotic prescription and implementation of initial empirical antimicrobial dosage recommendations. This study evaluated how gradual TDM introduction altered empirical dosages of meropenem and imipenem/cilastatin in our burn ICU. METHODS: Imipenem/cilastatin and meropenem use and daily empirical dosage at a five-bed burn ICU were analyzed retrospectively. Data for all burn admissions between 2001 and 2011 were extracted from the hospital's computerized information system. For each patient receiving a carbapenem, episodes of infection were reviewed and scored according to predefined criteria. Carbapenem trough serum levels were characterized. Prior to May 2007, TDM was available only by special request. Real-time carbapenem TDM was introduced in June 2007; it was initially available weekly and has been available 4 days a week since 2010. RESULTS: Of 365 patients, 229 (63%) received antibiotics (109 received carbapenems). Of 23 TDM determinations for imipenem/cilastatin, none exceeded the predefined upper limit and 11 (47.8%) were insufficient; the number of TDM requests was correlated with daily dose (r=0.7). Similar numbers of inappropriate meropenem trough levels (30.4%) were below and above the upper limit. Real-time TDM introduction increased the empirical dose of imipenem/cilastatin, but not meropenem. CONCLUSIONS: Real-time carbapenem TDM availability significantly altered the empirical daily dosage of imipenem/cilastatin at our burn ICU. Further studies are needed to evaluate the individual impact of TDM-based antibiotic adjustment on infection outcomes in these patients.