316 results for Randomized Algorithms


Relevance: 20.00%

Abstract:

The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were used. Measurements were performed on a water phantom in axial and helical acquisition modes, with an identical CT dose index for both installations. The influence of parameters such as pitch, reconstruction filter (soft, standard and bone) and reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed a marked variation in magnitude along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the NPS magnitude remained constant. The reconstruction filter, pitch and reconstruction algorithm all had important effects on the 3D NPS results for both MDCTs. With ASIR, a reduction in NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by interpolation compared with the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
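The local NPS estimation described above follows a standard recipe: subtract the mean from uniform-phantom ROIs, take the squared magnitude of the 2D DFT, and average the resulting periodograms over many ROIs. The sketch below is a minimal illustration of that recipe, not the authors' implementation; the function name and the white-noise sanity check are our own assumptions.

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """Estimate a 2D noise power spectrum from noise-only ROIs.

    rois: array of shape (n_rois, ny, nx), e.g. patches extracted from
    a uniform water-phantom scan; pixel_size: (dy, dx) in mm.
    """
    n_rois, ny, nx = rois.shape
    dy, dx = pixel_size
    spectra = []
    for roi in rois:
        # Detrend each ROI so only the noise component remains.
        noise = roi - roi.mean()
        dft = np.fft.fft2(noise)
        # Standard periodogram normalisation: pixel area over ROI size.
        spectra.append((dx * dy) / (nx * ny) * np.abs(dft) ** 2)
    # Ensemble averaging over ROIs reduces the estimator's variance.
    return np.fft.fftshift(np.mean(spectra, axis=0))
```

With this normalisation, the integral of the NPS over spatial frequency equals the pixel variance, which makes white noise a convenient check: its NPS is flat and integrates to the noise variance.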


BACKGROUND: Long-term side effects and the cost of HIV treatment motivate the development of simplified maintenance strategies. Monotherapy with ritonavir-boosted lopinavir (LPV/r-MT) is the most widely studied strategy. However, the efficacy of LPV/r-MT in compartments remains to be shown. METHODS: Randomized controlled open-label trial comparing LPV/r-MT with continued treatment for 48 weeks in treated patients with fully suppressed viral load. The primary endpoint was treatment failure in the central nervous system [cerebrospinal fluid (CSF)] and/or the genital tract. Treatment failure in blood was defined as two consecutive HIV RNA levels of more than 400 copies/ml. RESULTS: The trial was prematurely stopped when six patients on monotherapy (none in the continued-treatment arm) demonstrated viral failure in blood. At study termination, 60 patients were included, 29 of whom were randomized to monotherapy and 13 additional patients switched from continued treatment to monotherapy after 48 weeks. All failures occurred in patients with a nadir CD4 cell count below 200/microl and within the first 24 weeks of monotherapy. Among failing patients, all five patients with a lumbar puncture had an elevated HIV RNA load in CSF, and four of six had neurological symptoms. Viral load was fully resuppressed in all failing patients after resumption of the original combination therapy. No drug-resistant virus was found. The only predictor of failure was a low nadir CD4 cell count (P < 0.02). CONCLUSION: Maintenance of HIV therapy with LPV/r alone should not be recommended as a standard strategy, particularly not in patients with a CD4 cell count nadir of less than 200/microl. Further studies are warranted to elucidate the role of the central nervous system compartment in monotherapy failure.


The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done with a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of this figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although computing the MTF poses no problem with the traditional filtered back-projection (FBP) algorithm, this is not the case with iterative reconstruction (IR) algorithms such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express system resolution even with non-linear algorithms, we tuned the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations then yielded the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
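A common form of the NPW figure of merit, with the TTF substituted for the MTF as described above, integrates the task function against the transfer function and the NPS over radial spatial frequency. The sketch below is an illustrative implementation of that formula under an assumed radial symmetry; it is not the authors' code, and the variable names are our own.

```python
import numpy as np

def _trapz(y, x):
    # Local trapezoidal rule, to avoid NumPy version differences.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def snr_npw(f, task, ttf, nps):
    """Non-prewhitening model-observer SNR for a radially symmetric task.

    f: radial spatial frequencies (1/mm); task: task function |W(f)|
    (e.g. the spectrum of a disc of given contrast); ttf: target
    transfer function; nps: radial noise power spectrum.
    All inputs are 1D arrays sampled on the same frequency axis.
    """
    w = 2 * np.pi * f  # radial weight: df_x df_y -> 2*pi*f df
    num = _trapz(w * task ** 2 * ttf ** 2, f) ** 2
    den = _trapz(w * task ** 2 * ttf ** 4 * nps, f)
    return float(np.sqrt(num / den))
```

Lowering the NPS magnitude at fixed TTF, as reported for ASIR and MBIR, raises this SNR, consistent with the trend observed in the abstract.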


Patients with chronic obstructive pulmonary disease (COPD) often develop weight loss, which is associated with increased mortality. Recombinant human growth hormone (rhGH) treatment has been proposed to improve nitrogen balance and to increase muscle strength in these patients. The aim of this study was to assess the effects of rhGH administration on the nutritional status, resting metabolism, muscle strength, exercise tolerance, dyspnea, and subjective well-being of underweight patients with stable COPD. Sixteen patients attending a pulmonary rehabilitation program (age: 66 +/- 9 yr; weight: 77 +/- 7% of ideal body weight; FEV1: 39 +/- 13% of predicted) were randomly treated daily with either 0.15 IU/kg rhGH or placebo for 3 wk in a double-blind fashion. Measurements were made at the beginning (D0) and at the end (D21) of treatment and 2 mo later (D81). Body weight was similar in the two groups during the study, but lean body mass was significantly higher in the rhGH group at D21 (p < 0.01) and D81 (p < 0.05). The increase in lean body mass was 2.3 +/- 1.6 kg in the rhGH group and 1.1 +/- 0.9 kg in the control group at D21, and 1.9 +/- 1.6 kg in the rhGH group and 0.7 +/- 2.1 kg in the control group at D81. At D21, the resting energy expenditure was increased in the rhGH group (107.8% of D0, p < 0.001 compared with the control group). At D21 and D81, the changes in maximal respiratory pressures, handgrip strength, maximal exercise capacity, and subjective well-being were similar in the two groups. At D21, the 6-min walking distance decreased in the rhGH group (-13 +/- 31%) and increased in the control group (+10 +/- 14%; p < 0.01). We conclude that the daily administration of 0.15 IU/kg rhGH for 3 wk increases lean body mass but does not improve muscle strength or exercise tolerance in underweight patients with COPD.


BACKGROUND AND STUDY AIMS: Removal of colorectal polyps is routinely performed during withdrawal of the endoscope. However, polyps detected during insertion of the colonoscope may be missed at withdrawal. We aimed to evaluate whether polypectomy during both insertion and withdrawal increases polyp detection and removal rates compared with polypectomy at withdrawal only, and to assess the duration of both approaches. PATIENTS AND METHODS: Patients were included in the study when the first polyp was detected and were randomized into two groups: in group A, polyps ≤ 10 mm in diameter were removed during both insertion and withdrawal of the colonoscope, while in group B these polyps were removed at withdrawal only. Main outcome measures were duration of colonoscopy, number of polyps detected during insertion but not recovered during withdrawal, technical ease, patient discomfort, and complications. RESULTS: 150 patients were randomized to group A and 151 to group B. Mean (± standard deviation [SD]) duration of colonoscopy did not differ between the groups (30.8 ± 15.6 min [A] vs. 28.5 ± 13.8 min [B], P = 0.176). In group A, 387 polyps (mean 2.58 per colonoscopy) were detected and removed, compared with 389 polyps detected (mean 2.58 per colonoscopy) in group B, of which 376 were removed (13 polyps were missed; mean size [SD] 3.2 [1.3] mm; 7.3% of patients). Patient tolerance was similar in the two groups. CONCLUSIONS: Removal of polyps ≤ 10 mm during withdrawal only is associated with a considerable polyp miss rate. We therefore recommend that these polyps be removed during both insertion and withdrawal.


Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to serve as predictive engines in decision support systems for environmental data mining, ranging from pattern recognition through modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to their software implementation: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with the timely topic of automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. Its performance is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly and easy-to-use interface.
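The GRNN at the heart of the automatic-mapping results is, in essence, Nadaraya-Watson kernel regression: each training sample votes for its target value with a Gaussian weight in the input (geo-feature) space, and a single smoothing parameter sigma controls the bandwidth. Below is a minimal sketch of that idea with our own naming; it is not the Machine Learning Office code.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """General Regression Neural Network (Specht-style) prediction.

    Equivalent to Nadaraya-Watson kernel regression with a Gaussian
    kernel: every training point contributes its target value, weighted
    by its distance to the query point. sigma is the single smoothing
    parameter, typically tuned by cross-validation.
    """
    # Squared Euclidean distances between queries and training points.
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # Weighted average of training targets (kernel-smoothed surface).
    return (w @ y_train) / w.sum(axis=1)
```

Because the prediction is a weighted average of observed targets, the GRNN interpolates smoothly and never produces values outside the range of the training targets, one reason this kind of model behaves robustly in automatic-mapping settings such as SIC 2004.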


BACKGROUND: Biodegradable polymers for the release of antiproliferative drugs from metallic drug-eluting stents aim to improve long-term vascular healing and efficacy. We designed a large-scale clinical trial to compare a novel thin-strut, silicon carbide-coated cobalt-chromium drug-eluting stent releasing sirolimus from a biodegradable polymer (O-SES, Orsiro; Biotronik, Bülach, Switzerland) with the durable polymer-based Xience Prime/Xpedition everolimus-eluting stent (EES; Abbott Vascular, IL) in an all-comers patient population. DESIGN: The multicenter BIOSCIENCE trial (NCT01443104) randomly assigned 2,119 patients to treatment with biodegradable polymer sirolimus-eluting stents (SES) or durable polymer EES at 9 sites in Switzerland. Patients with chronic stable coronary artery disease or acute coronary syndromes, including non-ST-elevation and ST-elevation myocardial infarction, were eligible for the trial if they had at least 1 lesion with a diameter stenosis >50% appropriate for coronary stent implantation. The primary end point, target lesion failure (TLF), is a composite of cardiac death, target vessel myocardial infarction, and clinically driven target lesion revascularization within 12 months. Assuming a TLF rate of 8% at 12 months in both treatment arms and accepting 3.5% as a margin for noninferiority, inclusion of 2,060 patients would provide more than 80% power to detect noninferiority of the biodegradable polymer SES compared with the durable polymer EES at a 1-sided type I error of 0.05. Clinical follow-up will be continued through 5 years. CONCLUSION: The BIOSCIENCE trial will determine whether the biodegradable polymer SES is noninferior to the durable polymer EES with respect to TLF.
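As a back-of-envelope check of the stated design, the standard normal-approximation formula for noninferiority of two proportions (assuming equal true event rates in both arms and 1:1 allocation) can be evaluated directly. This is our own reconstruction, not the trial's actual calculation; with an 8% TLF rate, a 3.5% margin, one-sided alpha of 0.05 and 90% power it reproduces the 2,060 total, consistent with the protocol's statement of "more than 80% power".

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n_per_arm(p_event, margin, alpha=0.05, power=0.90):
    """Per-arm sample size for noninferiority of two proportions
    (normal approximation, equal allocation, one-sided alpha)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    z_beta = NormalDist().inv_cdf(power)
    # Variance term under equal true event rates in both arms.
    variance = 2 * p_event * (1 - p_event)
    return ceil((z_alpha + z_beta) ** 2 * variance / margin ** 2)

# BIOSCIENCE-like design: 8% TLF in both arms, 3.5% margin.
n_total = 2 * noninferiority_n_per_arm(0.08, 0.035)  # -> 2060
```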


BACKGROUND: The Advisa MRI system is designed to safely undergo magnetic resonance imaging (MRI). Its influence on image quality is not well known. OBJECTIVE: To evaluate cardiac magnetic resonance (CMR) image quality and to characterize myocardial contraction patterns by using the Advisa MRI system. METHODS: In this international trial with 35 participating centers, an Advisa MRI system was implanted in 263 patients. Of those, 177 were randomized to the MRI group and 150 underwent MRI scans at the 9- to 12-week visit. Left ventricular (LV) and right ventricular (RV) cine long-axis steady-state free precession MR images were graded for quality. Signal loss along the implantable pulse generator and leads was measured. The tagging CMR data quality was assessed as the percentage of trackable tagging points on complementary spatial modulation of magnetization acquisitions (n = 16), and segmental circumferential fiber shortening was quantified. RESULTS: Of all cine long-axis steady-state free precession acquisitions, 95% of LV and 98% of RV acquisitions were of diagnostic quality, with 84% and 93%, respectively, being of good or excellent quality. Tagging points were trackable from systole into early diastole (360-648 ms after the R wave) in all segments. During RV pacing, tagging demonstrated a dyssynchronous contraction pattern, which was not observed in nonpaced (n = 4) and right atrial-paced (n = 8) patients. CONCLUSIONS: In the Advisa MRI study, high-quality CMR images for the assessment of cardiac anatomy and function were obtained in most patients with an implanted pacing system. In addition, this study demonstrated the feasibility of acquiring tagging data to study LV function during pacing.


BACKGROUND: Ischemic stroke is the leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superiority of the efficacy of terutroban versus aspirin in secondary prevention of cerebrovascular and cardiovascular events. METHODS AND RESULTS: The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged ≥55 years who have suffered an ischemic stroke (≤3 months previously) or a transient ischemic attack (≤8 days previously). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last for 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain statistical power of 90%, this requires inclusion of at least 18,000 patients in this event-driven trial. The first patient was randomized in February 2006. CONCLUSIONS: The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event.
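For an event-driven log-rank design such as this, the required number of primary events is commonly approximated with Schoenfeld's formula. The sketch below is our own reconstruction, not the published calculation: for a 13% relative risk reduction (hazard ratio 0.87), 90% power and two-sided alpha of 0.05 it gives roughly 2,170 events, the same order as the quoted 2,340, which presumably reflects additional design adjustments.

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.90):
    """Number of events needed for a two-sided log-rank test with
    1:1 allocation (Schoenfeld approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return ceil(4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2)

# 13% relative risk reduction corresponds to a hazard ratio of 0.87.
events = schoenfeld_events(0.87)
```

The formula depends only on the number of events, not directly on the number of patients, which is why such trials enroll until the target event count is reached.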


PURPOSE/OBJECTIVE(S): To analyze the long-term outcome of treatment with concomitant cisplatin and hyperfractionated radiotherapy in locally advanced head and neck cancer compared with hyperfractionated radiotherapy alone. MATERIALS/METHODS: From July 1994 to July 2000, a total of 224 patients with squamous cell carcinoma of the head and neck were randomized to either hyperfractionated radiotherapy (median dose 74.4 Gy; 1.2 Gy twice daily) or the same radiotherapy combined with two cycles of concomitant cisplatin (20 mg/m2 for 5 consecutive days during weeks 1 and 5). The primary endpoint was time to any treatment failure; secondary endpoints were locoregional failure, metastatic failure, overall survival, and late toxicity assessed according to RTOG criteria. The trial was registered at the National Institutes of Health (www.clinicaltrials.gov; identifier: NCT00002654). RESULTS: Median follow-up was 9.5 years (range, 0.1-15.4 years). Median time to any treatment failure was not significantly different between treatment arms (p = 0.19). Locoregional control (p < 0.05), distant metastasis-free survival (p = 0.02) and cancer-specific survival (p = 0.03) were significantly improved in the combined treatment arm, with no difference in late toxicity between treatment arms. However, overall survival was not significantly different (p = 0.19). CONCLUSIONS: After long-term follow-up, combined treatment with cisplatin and hyperfractionated radiotherapy maintained improved locoregional control, distant metastasis-free survival, and cancer-specific survival compared with hyperfractionated radiotherapy alone, with no difference in late toxicity.


Objectives: To assess the difference in direct medical costs between on-demand (OD) treatment with esomeprazole (E) 20 mg and continuous (C) treatment with E 20 mg q.d., from a clinical practice perspective, in patients with gastroesophageal reflux disease (GERD) symptoms. Methods: This open, randomized study (ONE: On-demand Nexium Evaluation) compared two long-term management options with E 20 mg in endoscopically uninvestigated patients seeking primary care for GERD symptoms who demonstrated complete relief of symptoms after an initial 4-week treatment with E 40 mg. Data on consumed quantities of all cost items were collected in the study, while data on prices during the study period were collected separately. The analysis was done from a societal perspective. Results: Forty-nine percent (484 of 991) of patients randomized to the OD regimen and 46% (420 of 913) of patients in the C group had at least one contact with the investigator that would have occurred outside the protocol. The difference in adjusted mean direct medical costs between the treatment groups was CHF 88.72 (95% confidence interval: CHF 41.34-153.95) in favor of the OD treatment strategy (Wilcoxon rank-sum test: P < 0.0001). Adjusted direct nonmedical costs and productivity loss were similar in both groups. Conclusions: The adjusted direct medical costs of a 6-month OD treatment with esomeprazole 20 mg in uninvestigated patients with symptoms of GERD were significantly lower than those of continuous treatment with E 20 mg once a day. OD therapy represents a cost-saving alternative to the continuous treatment strategy with E.


PURPOSE: The European Organisation for Research and Treatment of Cancer (EORTC) and National Cancer Institute of Canada trial of temozolomide (TMZ) and radiotherapy (RT) in glioblastoma (GBM) demonstrated that the combination of TMZ and RT conferred a significant and meaningful survival advantage compared with RT alone. We evaluated in this trial whether recursive partitioning analysis (RPA) retains its overall prognostic value, and what the benefit of the combined modality is in each RPA class. PATIENTS AND METHODS: Five hundred seventy-three patients with newly diagnosed GBM were randomly assigned to standard postoperative RT or to the same RT with concomitant TMZ followed by adjuvant TMZ. The primary end point was overall survival. The EORTC RPA used here accounts for age, WHO performance status, extent of surgery, and the Mini-Mental Status Examination. RESULTS: Overall survival was statistically different among RPA classes III, IV, and V, with median survival times of 17, 15, and 10 months, respectively, and 2-year survival rates of 32%, 19%, and 11%, respectively (P < .0001). Survival with combined TMZ/RT was higher in RPA class III, with a 21-month median survival time and a 43% 2-year survival rate, versus 15 months and 20% for RT alone (P = .006). In RPA class IV, the survival advantage remained significant, with median survival times of 16 v 13 months, respectively, and 2-year survival rates of 28% v 11%, respectively (P = .0001). In RPA class V, however, the survival advantage of RT/TMZ was of borderline significance (P = .054). CONCLUSION: RPA retains its prognostic significance overall, as well as in patients receiving RT with or without TMZ for newly diagnosed GBM, particularly in classes III and IV.


BACKGROUND: Despite advances in treatment, survival of GBM patients older than 60 years is still often less than 1 year. Given this short expected survival, the quality of the remaining life and the effects of therapy on health-related quality of life (HRQoL) should receive special emphasis when recommending treatment for individual patients. Several studies have focused on survival in the elderly, but few data are available on HRQoL for different treatments. In a randomized trial, we compared survival and HRQoL for 3 treatment options: standard RT over 6 weeks, hypofractionated RT, or chemotherapy with TMZ. MATERIALS AND METHODS: Newly diagnosed GBM patients aged ≥60 years with PS 0-2 were randomized to either standard RT (60 Gy in 2-Gy fractions over 6 weeks), hypofractionated RT (34 Gy in 3.4-Gy fractions over 2 weeks), or 6 cycles of chemotherapy with TMZ (200 mg/m2 on days 1-5 every 28 days). HRQoL was determined with the EORTC QLQ-C30 questionnaire and the Brain Cancer Module at inclusion, before the start of therapy, and at 6 weeks, 3 months, and 6 months after the start of treatment. Patients were followed until death. The primary study endpoint was overall survival (OS); secondary objectives were HRQoL, neurologic symptom control, and safety. RESULTS: A total of 342 patients were included: 292 patients were randomized between the 3 treatment options and 50 patients between hypofractionated RT and TMZ. Median age was 70 years (range 60-92), and 58% were male. Performance status was 0-1 for 75% of patients, and 73% had undergone surgical resection. CONCLUSION: The results of the HRQoL analysis of this trial will be presented together with survival data at the upcoming EANO meeting.