49 results for longitudinal data-analysis
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angularly resolved ultra-small-angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal-space information is accessible by grating interferometry in addition to real-space information. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves the deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small-angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (Wiener, iterative Wiener and Lucy-Richardson) on realistically modeled, noisy and periodic input data. The simulations showed that, given the characteristics of the signals in this context, the Lucy-Richardson algorithm is the most reliable and most efficient. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
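As an illustration of the kind of deconvolution the abstract compares, the following is a minimal sketch of a Lucy-Richardson iteration for a periodic 1-D signal; the signal, point-spread function and iteration count are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def lucy_richardson(observed, psf, iterations=50):
    """Minimal Lucy-Richardson deconvolution for a periodic 1-D signal.

    `psf` is assumed to have the same length as `observed` and to be
    circularly centred (peak at index 0); circular convolution via the FFT.
    """
    psf_f = np.fft.fft(psf)
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF.
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate) * psf_f))
        ratio = observed / np.maximum(blurred, 1e-12)
        # Correlate the ratio with the PSF (conjugate in Fourier space) and update.
        correction = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_f)))
        estimate *= correction
    return estimate
```

In this sketch, the Wiener-type alternatives mentioned in the abstract would replace the iterative loop with a single Fourier-domain division regularised by an estimated noise-to-signal ratio.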
Abstract:
Overall introduction.- Longitudinal studies have been designed to investigate prospectively, from their beginning, the pathway leading from health to frailty and to disability. Knowledge about determinants of healthy ageing and health behaviour (resources) as well as risks of functional decline is required to propose appropriate preventative interventions. The functional status of older people is important with regard to clinical outcome in general, healthcare need and mortality. Part I.- Results and interventions from LUCAS (Longitudinal Urban Cohort Ageing Study). Authors.- J. Anders, U. Dapp, L. Neumann, F. Pröfener, C. Minder, S. Golgert, A. Daubmann, K. Wegscheider, W. von Renteln-Kruse. Methods.- The LUCAS core project is a longitudinal cohort of urban community-dwelling people 60 years and older, recruited in 2000/2001. Further LUCAS projects are cross-sectional comparative and interventional studies (RCT). Results.- The emphasis will be on geriatric medical care in a population-based approach, discussing different forms of access as well (Dapp et al. BMC Geriatrics 2012, 12:35; http://www.biomedcentral.com/1471-2318/12/35): - longitudinal data from the LUCAS urban cohort (n = 3,326) will be presented covering 10 years of observation, including the prediction of functional decline, need of nursing care, and mortality by means of a self-completed screening tool; - interventions to prevent functional decline focus on first (pre-clinical) signs of pre-frailty before entering the frailty cascade ("Active Health Promotion in Old Age", "geriatric mobility centre") or disability ("home visits"). Conclusions.- The LUCAS research consortium was established to study particular aspects of functional competence and its changes with ageing, to detect pre-clinical signs of functional decline, and to address questions on how to maintain functional competence and prevent adverse outcomes in different settings. The multidimensional database allows the exploration of several further questions. Gait performance was examined with the GAITRite® system. Supported by the Federal Ministry for Education and Research (BMBF Funding No. 01ET1002A). Part II.- Selected results from the Lausanne cohort 65+ (Lc65+) Study (Switzerland). Authors.- Prof Santos-Eggimann Brigitte, Dr Seematter-Bagnoud Laurence, Prof Büla Christophe, Dr Rochat Stéphane. Methods.- The Lc65+ cohort was launched in 2004 with the random selection of 3054 eligible individuals aged 65 to 70 (birth years 1934-1938) in the non-institutionalized population of Lausanne (Switzerland). Results.- Information is collected on life-course social and health-related events, socio-economic, medical and psychosocial dimensions, lifestyle habits, limitations in activities of daily living, mobility impairments, and falls. Gait performance is objectively measured using body-fixed sensors. Frailty is assessed using Fried's frailty phenotype. Follow-up consists of annual self-completed questionnaires, as well as physical examination and physical and mental performance tests every three years. - Lausanne cohort 65+ (Lc65+): design and longitudinal outcomes. The baseline data collection was completed among 1422 participants in 2004-2005 through self-completed questionnaires, face-to-face interviews, physical examination and tests of mental and physical performance. Information about institutionalization, self-reported health services utilization, and death is also assessed.
An additional random sample (n = 1525) of subjects aged 65-70 years (birth years 1939-1943) was recruited in 2009. - Lecture no. 4: Alcohol intake and gait parameters: prevalent and longitudinal associations in the Lc65+ study. The association between alcohol intake and gait performance was investigated.
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, new challenges in visualization and clustering are posed when dealing with the special characteristics of these data: for instance, their complex structures, the large number of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and the second for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow the integration of space and time seamlessly and simultaneously in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and number of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a-priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, hence, when dealing with large datasets, the processes can be distributed, reducing the computational cost; (3) only three parameters are necessary to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm lies in its ability to create a structure that allows the visual exploratory data analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values, and outliers). Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and a third that is our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge there is no prior work applying and comparing the performance of these techniques on spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by using the FGHSON clustering algorithm.
The developed methodologies are used in two case studies. In the first, the objective was to find similar agroecozones through time; in the second, it was to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool which integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
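Since the FGHSON is the author's original algorithm and its details are not given here, the following is only a minimal sketch of the standard self-organizing map it builds on, together with how a component plane (the per-variable slice of the weight cube that the Tree-structured SOM Component Planes organize) is obtained; grid size, decay schedule and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Classic online SOM: pull the best-matching unit and its neighbours toward each sample."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    n_samples, n_features = data.shape
    weights = rng.random((rows, cols, n_features))
    # Map coordinates of every node, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    total_steps = epochs * n_samples
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(n_samples)]:
            # Best-matching unit: node whose weight vector is closest to the sample.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
            frac = step / total_steps            # linear decay of learning rate and radius
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma**2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Component plane of variable j: weights[:, :, j], one 2-D image per input variable.
som = train_som(np.random.default_rng(1).random((500, 6)))
plane_0 = som[:, :, 0]
```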
Abstract:
The use of synthetic combinatorial peptide libraries in positional scanning format (PS-SCL) has recently emerged as an alternative approach for the identification of peptides recognized by T lymphocytes. The choice of both the PS-SCL used for screening experiments and the method used for data analysis is crucial for implementing this approach. With this aim, we tested the recognition of different PS-SCL by a tyrosinase 368-376-specific CTL clone and analyzed the data obtained with a recently developed biometric data analysis based on a model of independent and additive contributions of individual amino acids to peptide antigen recognition. Mixtures defined with amino acids present at the corresponding positions in the native sequence were among the most active for all of the libraries. Somewhat surprisingly, a higher number of native amino acids were identifiable by using amidated COOH-terminal rather than free COOH-terminal PS-SCL. Also, our data clearly indicate that when using PS-SCL longer than optimal, frame shifts occur frequently and should be taken into account. Biometric analysis of the data obtained with the amidated COOH-terminal nonapeptide library allowed the identification of the native ligand as the sequence with the highest score in a public human protein database. However, the adequacy of the PS-SCL data for the identification of the peptide ligand varied depending on the PS-SCL used. Altogether these results provide insight into the potential of PS-SCL for the identification of CTL-defined tumor-derived antigenic sequences and may significantly improve our ability to interpret the results of these analyses.
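The "independent and additive contribution" model mentioned above amounts to scoring any candidate peptide by summing one contribution per position and amino acid, as estimated from the PS-SCL screening data. A minimal sketch of such scoring follows; the contribution values are purely hypothetical placeholders, not data from the study.

```python
# Hypothetical per-position contribution table derived from PS-SCL screening:
# contributions[position][amino_acid] -> estimated contribution to recognition.
contributions = [
    {"Y": 1.8, "M": 0.4, "A": 0.1},   # position 1 (placeholder values)
    {"M": 1.5, "L": 1.1, "V": 0.6},   # position 2
    {"D": 1.2, "E": 0.9},             # position 3
    # ... one dictionary per position of the nonapeptide
]

def additive_score(peptide, contributions):
    """Score a peptide as the sum of independent per-position amino-acid contributions."""
    return sum(table.get(aa, 0.0) for table, aa in zip(contributions, peptide))

# Ranking every nonapeptide of a protein database by this score is then used to ask
# whether the native ligand comes out on top.
print(additive_score("YMDGTMSQV", contributions))  # example nonapeptide
```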
Abstract:
The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. On the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside of the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified "natural" modules of gene expression in mouse and human, and I showed that the expression of organ-specific and system-specific genes tends to be conserved between such distant vertebrates as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development ("evo-devo") is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during mid-development of zebrafish, which supports the "hourglass model". In contrast, events of gene duplication and new gene introduction are rarest in early development, which supports the "early conservation model". In addition to the biological insights on transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under detailed analysis. This demonstrates that complex analysis of high-throughput data requires co-operation between biologists, bioinformaticians, and statisticians.
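As a generic illustration of what identifying "modules" of gene expression can look like in practice (this is not the specific method used in the thesis), genes can be grouped by hierarchically clustering a gene-gene correlation matrix; the data, cluster count and linkage choice below are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative expression matrix: rows = genes, columns = samples (random placeholder data).
rng = np.random.default_rng(0)
expression = rng.standard_normal((50, 20))

# Correlation-based distance: strongly co-expressed genes end up close together.
corr = np.corrcoef(expression)
dist = 1.0 - corr

# Average-linkage clustering on the condensed distance matrix; cutting the tree into a
# fixed number of groups yields candidate co-expression modules.
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
modules = fcluster(Z, t=4, criterion="maxclust")
print(modules)  # module label for each of the 50 genes
```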
Abstract:
Geophysical techniques can help to bridge the inherent gap with regard to spatial resolution and range of coverage that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data is accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
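For readers unfamiliar with conditional stochastic simulation, the sketch below shows the simplest Gaussian variant on a 1-D grid: realizations are drawn from the distribution of the field conditioned on a few point observations (e.g. porosity log values). The covariance model, grid, observation values and prior mean are illustrative assumptions; the paper's actual Monte-Carlo procedure, which also honors tomographic georadar data, is considerably more elaborate.

```python
import numpy as np

def exp_cov(h, sill=1.0, range_=5.0):
    """Exponential covariance model as a function of lag distance h."""
    return sill * np.exp(-np.abs(h) / range_)

# 1-D grid and a few conditioning points (assumed porosity values).
x = np.arange(100, dtype=float)
obs_idx = np.array([10, 40, 75])
obs_val = np.array([0.20, 0.35, 0.28])
mean = 0.27                                   # assumed prior mean porosity

H = np.abs(x[:, None] - x[None, :])
C = exp_cov(H)
Coo = C[np.ix_(obs_idx, obs_idx)]
Cuo = C[:, obs_idx]

# Conditional mean and covariance of the Gaussian field given the observations.
W = Cuo @ np.linalg.inv(Coo)
mu_c = mean + W @ (obs_val - mean)
C_c = C - W @ Cuo.T

# Draw Monte Carlo realizations that honour the conditioning data exactly.
L = np.linalg.cholesky(C_c + 1e-8 * np.eye(len(x)))
rng = np.random.default_rng(0)
realizations = mu_c[None, :] + (L @ rng.standard_normal((len(x), 20))).T
```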
Abstract:
INTRODUCTION: infants hospitalised in neonatology are inevitably exposed to pain repeatedly. Premature infants are particularly vulnerable, because they are hypersensitive to pain and demonstrate diminished behavioural responses to it. They are therefore at risk of developing short- and long-term complications if pain remains untreated. CONTEXT: compared to acute pain, there is limited evidence in the literature on prolonged pain in infants; however, its prevalence is reported to be between 20 and 40 %. OBJECTIVE: this single case study aimed to identify the bio-contextual characteristics of neonates who experienced prolonged pain. METHODS: this study was carried out in the neonatal unit of a tertiary referral centre in Western Switzerland. A retrospective data analysis of the profiles of seven infants who experienced prolonged pain was performed using five different data sources. RESULTS: the mean gestational age of the seven infants was 32 weeks. The main diagnoses included prematurity and respiratory distress syndrome. The total observations (N=55) showed that the participants had on average 21.8 (SD 6.9) painful procedures per day that were estimated to be of moderate to severe intensity. Of the 164 recorded pain scores (2.9 pain assessments/day/infant), 14.6 % confirmed acute pain. Among those experiencing acute pain, analgesia was given in 16.6 % of cases and 79.1 % received no analgesia. CONCLUSION: this study highlighted the difficulty of managing pain in neonates who are exposed to numerous painful procedures. Pain in this population remains under-evaluated and, as a result, undertreated. Results of this study showed that nursing documentation related to pain assessment is not systematic. Regular assessment and documentation of acute and prolonged pain are recommended. This could be achieved with clear guidelines on the Assessment Intervention Reassessment (AIR) cycle with validated measures adapted to neonates. The adequacy of pain assessment is a prerequisite for appropriate pain relief in neonates.
Abstract:
Résumé: This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be regarded as a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially distributed informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited for implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the MultiLayer Perceptron (MLP), the best-known algorithm in artificial intelligence, General Regression Neural Networks (GRNN), Probabilistic Neural Networks (PNN), Self-Organizing Maps (SOM), Gaussian Mixture Models (GMM), Radial Basis Function Networks (RBF) and Mixture Density Networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory Data Analysis (EDA) is the first step of any data analysis. In this thesis the concepts of Exploratory Spatial Data Analysis (ESDA) are treated both with the traditional geostatistical approach, using experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and allows the detection of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The General Regression Neural Network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a software collection, Machine Learning Office. This collection was developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly and easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and predictions, as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to reveal the presence of spatial patterns described, at least, by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
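A GRNN prediction is essentially a Nadaraya-Watson kernel-weighted average of the training targets, with a single bandwidth parameter sigma that can be tuned by cross-validation, which is what makes it attractive for automatic mapping. A minimal sketch follows; the coordinates, target field and sigma value are illustrative assumptions, not the SIC 2004 data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network: Gaussian-kernel-weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

# Toy example: interpolate a scalar field measured at scattered 2-D locations onto a grid.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 10.0, size=(200, 2))                 # e.g. easting / northing
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(200)
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z = grnn_predict(X_train, y_train, grid, sigma=0.5).reshape(gx.shape)
```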
Abstract:
BACKGROUND: Examination of patterns and intensity of physical activity (PA) across cultures where obesity prevalence varies widely provides insight into one aspect of the ongoing epidemiologic transition. The primary hypothesis being addressed is whether low levels of PA are associated with excess weight and adiposity. METHODS: We recruited young adults from five countries (500 per country, 2500 total, ages 25-45 years), spanning the range of obesity prevalence. Men and women were recruited from a suburb of Chicago, Illinois, USA; urban Jamaica; rural Ghana; peri-urban South Africa; and the Seychelles. PA was measured using accelerometry and expressed as minutes per day of moderate-to-vigorous activity or sedentary behavior. RESULTS: Obesity (BMI ≥ 30) prevalence ranged from 1.4% (Ghanaian men) to 63.8% (US women). South African men were the most active, followed by Ghanaian men. Relatively small differences were observed across sites among women; however, women in Ghana accumulated the most activity. Within site-gender sub-groups, the correlation of activity with BMI and other measures of adiposity was inconsistent; the combined correlation across sites was -0.17 for men and -0.11 for women. In the ecological analysis, time spent in moderate-to-vigorous activity was inversely associated with BMI (r = -0.71). CONCLUSION: These analyses suggest that persons with greater adiposity tend to engage in less PA, although the associations are weak and the direction of causality cannot be inferred because the measurements are cross-sectional. Longitudinal data will be required to elucidate the direction of association.
Abstract:
We conducted this study to determine the relative influence of various mechanical and patient-related factors on the incidence of dislocation after primary total hip arthroplasty (THA). Of 2,023 THAs, 21 patients who had at least 1 dislocation were compared with a control group of 21 patients without dislocation, matched for age, gender, pathology, and year of surgery. Implant positioning, seniority of the surgeon, American Society of Anesthesiologists (ASA) score, and diminished motor coordination were recorded. Data analysis included univariate and multivariate methods. The dislocation risk was 6.9 times higher if total anteversion was not between 40 degrees and 60 degrees and 10 times higher in patients with high ASA scores. Surgeons should pay attention to the total anteversion (cup and stem) of the THA. The ASA score should be part of the preoperative assessment of dislocation risk.
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, followed by sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
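The hybrid scheme described above first removes a long-range trend with an ML regressor and then treats the residuals geostatistically. The sketch below shows only that first decomposition step with a support vector regressor; the coordinates and values are synthetic placeholders (not the Chernobyl data), and the variography and sequential simulation of the residuals are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic placeholder data: scattered 2-D sampling locations and measured values.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 100.0, size=(300, 2))
values = 0.05 * coords[:, 0] + np.sin(coords[:, 1] / 10.0) + 0.3 * rng.standard_normal(300)

# Step 1: model the long-range, non-stationary spatial trend with an ML regressor (here SVR).
trend_model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(coords, values)
trend = trend_model.predict(coords)

# Step 2: the residuals are expected to be closer to stationary and would then be handed to
# a geostatistical engine (variography + sequential Gaussian simulation) for uncertainty
# assessment; that step is omitted in this sketch.
residuals = values - trend
print(residuals.std())
```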
Abstract:
Linezolid is used off-label to treat multidrug-resistant tuberculosis (MDR-TB) in the absence of systematic evidence. We performed a systematic review and meta-analysis on the efficacy, safety and tolerability of linezolid-containing regimens based on individual data analysis. 12 studies (11 countries from three continents) reporting complete information on the safety, tolerability and efficacy of linezolid-containing regimens in treating MDR-TB cases were identified, based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Meta-analysis was performed using the individual data of 121 patients with a definite treatment outcome (cure, completion, death or failure). Most MDR-TB cases achieved sputum smear (86 (92.5%) out of 93) and culture (100 (93.5%) out of 107) conversion after treatment with individualised regimens containing linezolid (median (inter-quartile range) times for smear and culture conversion were 43.5 (21-90) and 61 (29-119) days, respectively) and 99 (81.8%) out of 121 patients were successfully treated. No significant differences were detected in the subgroup efficacy analysis (daily linezolid dosage ≤600 mg versus >600 mg). Adverse events were observed in 63 (58.9%) out of 107 patients, of which 54 (68.4%) out of 79 were major adverse events that included anaemia (38.1%), peripheral neuropathy (47.1%), gastro-intestinal disorders (16.7%), optic neuritis (13.2%) and thrombocytopenia (11.8%). The proportion of adverse events was significantly higher when the linezolid daily dosage exceeded 600 mg. The study results suggest excellent efficacy but also the need for caution in the prescription of linezolid.
Abstract:
OBJECTIVES: Non-steroidal anti-inflammatory drugs (NSAIDs) may cause kidney damage. This study assessed the impact of prolonged NSAID exposure on renal function in a large rheumatoid arthritis (RA) patient cohort. METHODS: Renal function was prospectively followed between 1996 and 2007 in 4101 RA patients with multilevel mixed models for longitudinal data over a mean period of 3.2 years. Among the 2739 'NSAID users' were 1290 patients treated with cyclooxygenase type 2 selective NSAIDs, while 1362 subjects were 'NSAID naive'. The primary outcome was the estimated glomerular filtration rate according to the Cockcroft-Gault formula (eGFRCG); secondary outcomes were the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration equations and serum creatinine concentrations. In sensitivity analyses, NSAID dosing effects were compared for patients with NSAID registration in ≤/>50%, ≤/>80% or ≤/>90% of assessments. FINDINGS: In patients with baseline eGFRCG >30 mL/min, eGFRCG evolved without significant differences over time between 'NSAID users' (mean change in eGFRCG -0.87 mL/min/year, 95% CI -1.15 to -0.59) and 'NSAID naive' patients (-0.67 mL/min/year, 95% CI -1.26 to -0.09, p=0.63). In a multivariate Cox regression analysis adjusted for the significant confounders age, sex, body mass index, arterial hypertension and heart disease, and for other non-significant factors, NSAIDs were an independent predictor of accelerated renal function decline only in patients with advanced baseline renal impairment (eGFRCG <30 mL/min). Analyses with secondary outcomes and sensitivity analyses confirmed these results. CONCLUSIONS: NSAIDs had no negative impact on renal function estimates except in patients with advanced renal impairment.
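For reference, the Cockcroft-Gault estimate used as the primary outcome is a simple closed-form formula; a minimal sketch follows, with the example patient values chosen purely for illustration.

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative example: a 65-year-old woman weighing 70 kg with serum creatinine 1.0 mg/dL.
print(round(cockcroft_gault_crcl(65, 70, 1.0, female=True), 1))  # -> about 62 mL/min
```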