238 results for "Predicting treatment time"
Abstract:
ABSTRACT: Invasive candidiasis is a frequent life-threatening complication in critically ill patients. Early diagnosis followed by prompt treatment, aimed at improving outcome while minimizing unnecessary antifungal use, remains a major challenge in the ICU setting. Timely patient selection thus plays a key role in clinically efficient and cost-effective management. Approaches combining clinical risk factors and Candida colonization data have improved our ability to identify such patients early. While the negative predictive value of scores and prediction rules is as high as 95 to 99%, the positive predictive value is much lower, ranging between 10 and 60%. Accordingly, if a positive score or rule is used to guide the start of antifungal therapy, many patients may be treated unnecessarily. Candida biomarkers display higher positive predictive values; however, they lack sensitivity and are thus not able to identify all cases of invasive candidiasis. The (1→3)-β-D-glucan (BG) assay, a panfungal antigen test, is recommended as a complementary tool for the diagnosis of invasive mycoses in high-risk hemato-oncological patients. Its role in the more heterogeneous ICU population remains to be defined. More efficient clinical selection strategies combined with high-performing laboratory tools are needed in order to treat the right patients at the right time while keeping the costs of screening and therapy as low as possible. The new approach proposed by Posteraro and colleagues in the previous issue of Critical Care meets these requirements. A single positive BG value in medical patients admitted to the ICU with sepsis and expected to stay for more than 5 days preceded the documentation of candidemia by 1 to 3 days with an unprecedented diagnostic accuracy. Applying this one-point fungal screening to a selected subset of ICU patients with an estimated 15 to 20% risk of developing candidemia is an appealing and potentially cost-effective approach. If confirmed by multicenter investigations, and extended to surgical patients at high risk of invasive candidiasis after abdominal surgery, this Bayesian risk stratification approach, aimed at maximizing clinical efficiency while minimizing health care resource utilization, may substantially simplify the management of critically ill patients at risk of invasive candidiasis.
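The pretest-to-posttest reasoning behind these figures can be made explicit with Bayes' theorem. The sketch below is illustrative only: the sensitivity and specificity values are assumptions chosen for the example, not figures reported for the BG assay in the study discussed above.

```python
# Illustrative predictive-value calculation (Bayes' theorem).
# The sensitivity and specificity below are assumed values for the example,
# not figures reported in the study discussed above.

def predictive_values(prevalence: float, sensitivity: float, specificity: float):
    """Return (PPV, NPV) for a test applied at a given pretest probability."""
    tp = prevalence * sensitivity               # true positives
    fp = (1 - prevalence) * (1 - specificity)   # false positives
    tn = (1 - prevalence) * specificity         # true negatives
    fn = prevalence * (1 - sensitivity)         # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Compare an unselected ICU population (low prevalence) with the selected
# subset carrying an estimated 15-20% risk of candidemia.
for prevalence in (0.02, 0.15, 0.20):
    ppv, npv = predictive_values(prevalence, sensitivity=0.90, specificity=0.85)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

The same arithmetic explains why a positive result carries far more weight in the pre-selected subset than in an unselected ICU population, while the negative predictive value stays high in both.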
Abstract:
Aims: The pivotal FREEDOM study evaluated the efficacy and safety of 3 years' denosumab treatment in women with postmenopausal osteoporosis (PMO).1 Since osteoporosis is a chronic condition requiring long-term therapy, FREEDOM was extended to further elucidate the safety and efficacy of long-term denosumab administration. We present data from the first 2 years of this extension, representing up to 5 years' continuous exposure to denosumab. Methods: Patients who completed FREEDOM were eligible for the extension. Women continued to receive (long-term group), or started after 3 years' placebo (cross-over group), denosumab 60 mg sc every 6 months and daily calcium and vitamin D. These data reflect 5 years' (long-term) or 2 years' (cross-over) continuous denosumab treatment. Efficacy measures include changes in BMD from extension study baseline and bone turnover markers (BTM). P-values are descriptive. Results: Of the 83.0% of subjects who completed FREEDOM, 70.2% (N = 4550) agreed to participate in the extension (long-term: 2343; cross-over: 2207). In the long-term group, there were further significant gains (P < 0.0001) in BMD in years 4 and 5: 1.9% and 1.7% at the lumbar spine, to a total of 13.7% from FREEDOM baseline, and 0.7% and 0.6% at the total hip, to a total of 7.0%. During their first 2 years' denosumab treatment, women in the cross-over group had significant improvements in lumbar spine (7.9%) and total hip BMD (4.1%) (P < 0.0001). Serum C-telopeptide (CTX) was rapidly reduced following denosumab dosing in both groups, with the characteristic attenuation of CTX reduction observed at the end of the dosing interval. A low incidence of new vertebral and nonvertebral fractures was reported for both groups. The denosumab safety profile did not change over time. Conclusions: Denosumab treatment for up to 5 years in women with PMO remains well tolerated, maintains reduction of BTMs and continues to significantly increase BMD. Reference: 1. Cummings. NEJM 2009;361:756.
Abstract:
Pelvic external radiotherapy with or without brachytherapy plays an important role in the management of pelvic cancers. Despite recent technical innovations including conformal three-dimensional (3D) external beam radiotherapy and more recently intensity modulated radiotherapy (IMRT), local side effects can occur secondary to normal tissue damage caused by ionising radiation. Morbidity depends on the anatomic position of the rectum within the pelvis and the fast turnover rate of the mucosa, as well as the characteristics of the radiation treatment and patient co-morbidities. Medical management is sometimes complex and merits herein a short review.
Abstract:
PURPOSE: Ipilimumab is a monoclonal antibody that blocks the immune-inhibitory interaction between CTL antigen 4 (CTLA-4) and its ligands on T cells. Clinical trials in cancer patients with ipilimumab have shown promising antitumor activity, particularly in patients with advanced melanoma. Often, tumor regressions in these patients are correlated with immune-related side effects such as dermatitis, enterocolitis, and hypophysitis. Although these reactions are believed to be immune-mediated, the antigenic targets for the cellular or humoral immune response are not known. EXPERIMENTAL DESIGN: We enrolled patients with advanced melanoma in a phase II study with ipilimumab. One of these patients experienced a complete remission of his tumor. The specificity and functional properties of CD8-positive T cells in his peripheral blood, in regressing tumor tissue, and at the site of an immune-mediated skin rash were investigated. RESULTS: Regressing tumor tissue was infiltrated with CD8-positive T cells, a high proportion of which were specific for Melan-A. The skin rash was similarly infiltrated with Melan-A-specific CD8-positive T cells, and a dramatic (>30-fold) increase in Melan-A-specific CD8-positive T cells was apparent in peripheral blood. These cells had an effector phenotype and lysed Melan-A-expressing tumor cells. CONCLUSIONS: Our results show that Melan-A may be a major target for both the autoimmune and antitumor reactions in patients treated with anti-CTLA-4, and describe for the first time the antigen specificity of CD8-positive T cells that mediate tumor rejection in a patient undergoing treatment with an anti-CTLA-4 antibody. These findings may allow a better integration of ipilimumab into other forms of immunotherapy.
Abstract:
OBJECTIVE: Virologic failure of HIV-positive patients is of special concern during pregnancy. We compared virologic failure and the frequency of treatment changes in pregnant and non-pregnant women of the Swiss HIV Cohort Study. METHODS: Using data on 372 pregnancies in 324 women, we describe antiretroviral therapy during pregnancy. Pregnant women on HAART at conception (n = 131) were matched to 228 non-pregnant women (interindividual comparison) and to a time period of equal length before and after pregnancy (intraindividual comparison). Women starting HAART during pregnancy (n = 145) were compared with 578 non-pregnant women starting HAART. FINDINGS: The median age at conception was 31 years, 16% (n = 50) of the women were infected through injecting drug use, and the median CD4 cell count was 489 cells/microl. In the majority of pregnancies (n = 220, 59%), women had started ART before conception. When ART was started during pregnancy (n = 145, 39%), it was mainly during the second trimester (n = 100, 69%). Two thirds (n = 26) of the 35 women starting in the third trimester were diagnosed with HIV during pregnancy. The risk of virologic failure tended to be lower in pregnant than in non-pregnant women [adjusted odds ratio 0.52 (95% confidence interval 0.25-1.09, P = 0.08)], but was similar in the intraindividual comparison (adjusted odds ratio 1.04, 95% confidence interval 0.48-2.28). Women starting HAART during pregnancy changed their treatment less often than non-pregnant women. CONCLUSION: Despite the physiological changes occurring during pregnancy, HIV-infected pregnant women are not at higher risk of virologic failure.
Abstract:
Background: Atazanavir boosted with ritonavir (ATV/r) and efavirenz (EFV) are both recommended as first-line therapies for HIV-infected patients. We compared the two therapies for virologic efficacy and immune recovery. Methods: We included all treatment-naïve patients in the Swiss HIV Cohort Study starting therapy after May 2003 with either ATV/r or EFV and a backbone of tenofovir and either emtricitabine or lamivudine. We used Cox models to assess time to virologic failure and repeated measures models to assess the change in CD4 cell counts over time. All models were fit as marginal structural models using both point of treatment and censoring weights. Intent-to-treat and various as-treated analyses were carried out; in the latter, patients were censored at their last recorded measurement if they changed therapy or were no longer adherent to therapy. Results: Patients starting EFV (n = 1,097) and ATV/r (n = 384) were followed for a median of 35 and 37 months, respectively. During follow-up, 51% of patients on EFV and 33% of patients on ATV/r remained adherent and made no change to their first-line therapy. Although intent-to-treat analyses suggest virologic failure was more likely with ATV/r, there was no evidence for this disadvantage in patients who adhered to first-line therapy. Patients starting ATV/r had a greater increase in CD4 cell count during the first year of therapy, but this advantage disappeared after one year. Conclusions: In this observational study, there was no good evidence of any intrinsic advantage for one therapy over the other, consistent with earlier clinical trials. Differences between therapies may arise in a clinical setting because of differences in adherence to therapy.
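For readers unfamiliar with this kind of analysis, the sketch below shows one common way to fit a weighted Cox model with stabilized inverse-probability-of-treatment weights in Python. It is only a rough illustration under assumed column names and baseline covariates (age, cd4, rna), not the study's actual code or covariate set.

```python
# Sketch: stabilized inverse-probability-of-treatment weights + weighted Cox model.
# Column names, covariates and the input file are assumptions for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis dataset

covariates = ["age", "cd4", "rna"]

# Denominator model: probability of starting ATV/r (vs EFV) given baseline covariates.
denom_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["atv_r"])
p_denom = denom_model.predict_proba(df[covariates])[:, 1]

# Numerator: marginal probability of starting ATV/r, which stabilizes the weights.
p_num = df["atv_r"].mean()

df["sw"] = df["atv_r"] * (p_num / p_denom) + (1 - df["atv_r"]) * ((1 - p_num) / (1 - p_denom))

# Weighted Cox model for time to virologic failure, with robust standard errors.
cph = CoxPHFitter()
cph.fit(df[["time_to_failure", "failed", "atv_r", "sw"]],
        duration_col="time_to_failure", event_col="failed",
        weights_col="sw", robust=True)
cph.print_summary()
```

In a full marginal structural analysis the censoring weights mentioned in the abstract would be estimated analogously and multiplied into the treatment weights; the sketch shows only the treatment part.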
Abstract:
OBJECTIVES: Gouty arthritis patients for whom non-steroidal anti-inflammatory drugs and colchicine are inappropriate have limited treatment options. Canakinumab, an anti-interleukin-1β monoclonal antibody, may be an option for such patients. The authors assessed the efficacy/safety of one dose of canakinumab 150 mg (n=230) or triamcinolone acetonide (TA) 40 mg (n=226) at baseline and upon a new flare in frequently flaring patients contraindicated for, intolerant of, or unresponsive to non-steroidal anti-inflammatory drugs and/or colchicine. Core study co-primary endpoints were pain intensity 72 h postdose (0-100 mm visual analogue scale) and time to first new flare. METHODS: Two 12-week randomised, multicentre, active-controlled, double-blind, parallel-group core studies with double-blind 12-week extensions (response in acute flare and in prevention of episodes of re-flare in gout (β-RELIEVED and β-RELIEVED-II)). RESULTS: 82.6% of patients had comorbidities. Mean 72-h visual analogue scale pain score was lower with canakinumab (25.0 mm vs 35.7 mm; difference, -10.7 mm; 95% CI -15.4 to -6.0; p<0.0001), with significantly less physician-assessed tenderness and swelling (ORs=2.16 and 2.74; both p≤0.01) versus TA. Canakinumab significantly delayed time to first new flare, reduced the risk of new flares by 62% versus TA (HR: 0.38; 95% CI 0.26 to 0.57) in the core studies and by 56% (HR: 0.44; 95% CI 0.32 to 0.60; both p≤0.0001) over the entire 24-week period, and decreased median C-reactive protein levels (p≤0.0001 at 72 h and 7 days). Over the 24-week period, adverse events were reported in 66.2% (canakinumab) and 52.8% (TA) and serious adverse events were reported in 8.0% (canakinumab) and 3.5% (TA) of patients. Adverse events reported more frequently with canakinumab included infections, low neutrophil count and low platelet count. CONCLUSION: Canakinumab provided significant pain and inflammation relief and reduced the risk of new flares in these patients with acute gouty arthritis.
Abstract:
OBJECTIVE: To assess the post-ischemic skin blood flow response after withdrawal of antihypertensive therapy in hypertensive patients with normal blood pressure during treatment. DESIGN AND METHODS: Twenty hypertensive patients (group A) with a normal clinic blood pressure (<140/90 mmHg) receiving antihypertensive treatment (any monotherapy; one pill per day for at least 6 months) had their treatment discontinued. Before medication withdrawal and 2, 4, 12 and 24 weeks thereafter, the following measurements were made: clinic blood pressure, home blood pressure (three times per week, morning and evening) and skin blood flow response to a 5 min forearm arterial occlusion (using laser Doppler flowmetry). The patients were asked to perform an ambulatory blood pressure recording at any time if home blood pressure was ≥160/95 mmHg on two consecutive days, and treatment was initiated again, after determination of the skin hyperemic response, if daytime ambulatory blood pressure was ≥140/90 mmHg. The same studies were performed in 20 additional hypertensive individuals in whom antihypertensive treatment was not withdrawn (group B). The allocation of patients to groups A and B was random. RESULTS: The data from 18 patients in group A who adhered strictly to the procedure were available for analysis. Seven of them had to start treatment again within the first 4 weeks of follow-up; four additional patients started treatment again during the next 8 weeks (group A1). The seven other patients remained untreated (group A2). The skin hyperemic response decreased significantly in patients in group A1 and returned to baseline values at the end of the study, when they were again receiving antihypertensive treatment. In patients in group A2, a significant attenuation of the hyperemic response was also observed. This impaired response was present even at the end of the 6-month follow-up, at which time the patients were still untreated but exhibited significantly higher blood pressure than before drug discontinuation. The hyperemic response of patients who did not stop treatment (group B) did not change during the course of the study. CONCLUSIONS: Our findings show a decrease in the post-ischemic skin blood flow response after withdrawal of antihypertensive treatment in hypertensive patients. This impaired response may be due to the development of endothelial dysfunction, vascular remodeling, or both, and might contribute to the return of blood pressure to hypertensive values after withdrawal of antihypertensive therapy.
Abstract:
Effective empirical treatment is of paramount importance to improve the outcome of patients with Staphylococcus aureus bacteraemia. We aimed to evaluate a PCR-based rapid diagnosis of methicillin resistance (GeneXpert MRSA) after early detection of S. aureus bacteraemia using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Patients with a first episode of S. aureus bacteraemia identified using MALDI-TOF MS were randomized in a prospective interventional open study between October 2010 and August 2012. In the control group, antibiotic susceptibility testing was performed after MALDI-TOF MS identification on blood culture pellets. In the intervention group, a GeneXpert MRSA assay was performed after S. aureus identification. The primary outcome was the performance of GeneXpert MRSA directly on blood cultures. We then assessed the impact of early diagnosis of methicillin resistance on the empirical treatment. In all, 197 episodes of S. aureus bacteraemia were included in the study, of which 106 were in the intervention group. The median time from MALDI-TOF MS identification to GeneXpert MRSA result was 97 min (range 25-250). Detection of methicillin resistance using GeneXpert MRSA had a sensitivity of 99% and a specificity of 100%. There was less unnecessary coverage of MRSA in the intervention group (17.1% versus 29.2%, p = 0.09). GeneXpert MRSA was highly reliable in diagnosing methicillin resistance when performed directly on positive blood cultures. This could help to avoid unnecessary prescription of anti-MRSA agents and promote earlier introduction of adequate coverage in unsuspected cases.
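For orientation, sensitivity and specificity are simple functions of a 2×2 contingency table; the counts in the short sketch below are hypothetical, chosen only to illustrate how such headline figures are obtained.

```python
# Sensitivity and specificity from a 2x2 table (counts are hypothetical).
tp, fn = 99, 1    # resistant isolates: test positive / test negative
tn, fp = 100, 0   # susceptible isolates: test negative / test positive

sensitivity = tp / (tp + fn)   # proportion of resistant isolates detected
specificity = tn / (tn + fp)   # proportion of susceptible isolates correctly ruled out
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```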
Abstract:
Niche conservatism, the tendency of a species' niche to remain unchanged over time, is often assumed when discussing, explaining or predicting biogeographical patterns. Unfortunately, there has been no basis for predicting niche dynamics over relevant timescales, from tens to a few hundreds of years. The recent application of species distribution models (SDMs) and phylogenetic methods to the analysis of niche characteristics has provided insight into niche dynamics. Niche shifts and conservatism have both occurred within the last 100 years, with recent speciation events, and deep within clades of species. There is increasing evidence that coordinated application of these methods can help to identify species that likely fulfill one key assumption in the predictive application of SDMs: an unchanging niche. This will improve confidence in SDM-based predictions of the impacts of climate change and species invasions on species distributions and biodiversity.
Abstract:
Understanding the distribution and composition of species assemblages and being able to predict them in space and time are highly important tasks for investigating the fate of biodiversity in the current context of global change. Species distribution models are tools that have proven useful to predict the potential distribution of species by relating their occurrences to environmental variables. Species assemblages can then be predicted by combining the predictions of individual species models. In the first part of my thesis, I tested the importance of new environmental predictors to improve species distribution prediction. I showed that edaphic variables, above all soil pH and nitrogen content, could be important in species distribution models. In a second chapter, I tested the influence of different resolutions of predictors on the predictive ability of species distribution models. I showed that fine-resolution predictors could improve the models for some species by giving a better estimation of the micro-topographic conditions that species tolerate, but that fine-resolution predictors for climatic factors still need to be improved. The second goal of my thesis was to test the ability of empirical models to predict characteristics of species assemblages such as species richness or functional attributes. I showed that species richness could be modelled efficiently and that the resulting prediction gave a more realistic estimate of the number of species than the one obtained by stacking outputs of single species distribution models. Regarding the prediction of functional characteristics (plant height, leaf surface, seed mass) of plant assemblages, mean and extreme values of functional traits were more predictable than indices reflecting the diversity of traits in the community. This approach proved interesting for understanding which environmental conditions influence particular aspects of vegetation functioning. It could also be useful to predict climate change impacts on the vegetation. In the last part of my thesis, I studied the capacity of stacked species distribution models to predict plant assemblages. I showed that this method tended to over-predict the number of species and that the composition of the community was not predicted exactly either. Finally, I combined the results of the macro-ecological models obtained in the preceding chapters with stacked species distribution models and showed that this approach significantly reduced the number of species predicted and that the prediction of composition was also improved in some cases. These results show that this method is promising; it now needs to be tested on further data sets. - Understanding how plants are distributed in the environment and organize themselves into communities is a key question in the current context of global change. This knowledge can help us safeguard species diversity and ecosystems. Statistical methods allow us to predict the distribution of plant species in geographic space and over time. These species distribution models relate the occurrences of a species to environmental variables in order to describe its potential distribution. This method has proven its worth for predicting individual species. More recently, several attempts have been made to stack individual species models in order to predict the composition of plant communities.
The first objective of my work is to improve distribution models by testing the importance of new predictive variables. Among various edaphic variables, soil pH and nitrogen content proved to be non-negligible factors for predicting plant distributions. In a second chapter, I also show that fine-resolution environmental predictors can reflect the micro-topographic conditions experienced by plants, but that they still need to be improved before they can be used effectively in the models. The second objective of this work was to study the development of predictive models for attributes of plant communities such as, for example, the species richness found at each point. I show that it is possible in this way to predict species richness values that are more realistic than those obtained by summing the predictions previously obtained for individual species. I also predicted, in space and time, characteristics of the vegetation such as its mean, minimum and maximum height. This approach can be useful for understanding which environmental factors promote different types of vegetation, as well as for assessing the changes to be expected in the vegetation in the future under different climate change scenarios. In a third part of my thesis, I explored the possibility of predicting plant assemblages, first by stacking the predictions obtained from individual models for each species. This method has the drawback of predicting too many species compared with what is observed in reality. I finally used the species richness model developed previously to constrain the results of the plant assemblage model. This improved the models by reducing over-prediction and by improving the prediction of species composition. This method seems promising, but further tests are needed to properly assess its capabilities.
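A minimal sketch of the constrained-stacking idea described above, under assumed inputs (it is not the thesis code): per-species occurrence probabilities from individual SDMs are stacked, and the predicted assemblage at each site is restricted to the S most probable species, where S is the richness predicted by a separate macroecological model (a probability-ranking rule).

```python
# Sketch: constraining a stacked species distribution model (S-SDM) by a
# predicted richness value via probability ranking. Inputs are assumptions:
# `probs` stands in for per-species occurrence probabilities from individual
# SDMs, `richness` for per-site richness from a separate macroecological model.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_species = 5, 12
probs = rng.random((n_sites, n_species))        # stand-in SDM outputs (sites x species)
richness = rng.integers(2, 6, size=n_sites)     # stand-in richness predictions per site

def constrained_assemblage(probs, richness):
    """Binary sites x species matrix keeping, at each site, only the
    `richness[i]` species with the highest predicted probabilities."""
    out = np.zeros_like(probs, dtype=int)
    for i, (p, k) in enumerate(zip(probs, richness)):
        top = np.argsort(p)[::-1][:k]           # rank species by probability
        out[i, top] = 1
    return out

assemblages = constrained_assemblage(probs, richness)
print(assemblages.sum(axis=1))  # equals the richness constraint at each site
```

By construction, the number of predicted species per site matches the richness model, which is how the over-prediction of a plain stacked model is reduced.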
Abstract:
SUMMARY: The aim of an antimicrobial treatment is to eradicate a bacterial infection. However, its efficacy is often difficult to assess rapidly with standard techniques. Estimating bacterial viability with molecular markers could accelerate this process. This work therefore investigates the possibility of using ribosomal RNA (rRNA) for that purpose. Cultures of Streptococcus gordonii susceptible (parent Wt) or tolerant (Tol1 mutant) to the bactericidal action of penicillin were exposed to different antibiotics. Bacterial survival over time was determined by comparing two methods: the reference method, viable counts, was compared with a molecular method consisting of amplifying part of the bacterial genome by quantitative real-time PCR. The chosen target had to reflect cell viability, and therefore to be synthesized constitutively during the life of the bacterium and destroyed rapidly upon cell death. A fragment of the 16S rRNA gene was selected. This work validated that choice by correlating this molecular marker with bacterial viability during bactericidal antibiotic treatment. As expected, penicillin-susceptible S. gordonii lost ≥4 log10 CFU/ml after 48 hours of penicillin treatment, whereas the tolerant Tol1 mutant lost ≤1 log10 CFU/ml. Interestingly, the amount of marker increased proportionally to the viable count during the bacterial growth phase. After antibiotic treatment was started, the evolution of the marker depended on the ability of the bacteria to survive the action of the antibiotic: stable during treatment of the tolerant strain, the amount of marker detected decreased proportionally to the viable count during treatment of the susceptible strain. This correlation was confirmed with other bactericidal antibiotics. In conclusion, PCR amplification of 16S ribosomal RNA allows rapid assessment of bacterial viability during antibiotic treatment, avoiding the need for culture, whose results become available only after more than 24 hours. This method thus offers the clinician a rapid evaluation of treatment efficacy, particularly in situations such as septic shock, where immediate initiation of effective treatment is one of the essential conditions for therapeutic success. ABSTRACT: Assessing bacterial viability by molecular markers might help accelerate the measurement of antibiotic-induced killing. This study investigated whether ribosomal RNA (rRNA) could be suitable for this purpose. Cultures of penicillin-susceptible and penicillin-tolerant (Tol1 mutant) Streptococcus gordonii were exposed to the mechanistically different antibiotics penicillin and levofloxacin. Bacterial survival was assessed by viable counts, and compared to quantitative real-time PCR amplification of either the 16S-rRNA genes (rDNA) or the 16S rRNA, following reverse transcription. Penicillin-susceptible S. gordonii lost ≥4 log10 CFU/ml of viability over 48 h of penicillin treatment. In comparison, the Tol1 mutant lost ≤1 log10 CFU/ml. Amplification of a 427-base fragment of 16S rDNA yielded amplicons that increased proportionally to viable counts during bacterial growth, but did not decrease during drug-induced killing.
In contrast, the same 427-base fragment amplified from 16S rRNA paralleled both bacterial growth and drug-induced killing. It also differentiated between penicillin-induced killing of the parent and the Tol1 mutant (≥4 log10 CFU/ml and ≤1 log10 CFU/ml, respectively), and detected killing by the mechanistically unrelated levofloxacin. Since large fragments of polynucleotides might be degraded faster than smaller fragments, the experiments were repeated by amplifying a 119-base region internal to the original 427-base fragment. The amount of 119-base amplicons increased proportionally to viability during growth, but remained stable during drug treatment. Thus, 16S rRNA was a marker of antibiotic-induced killing, but the size of the amplified fragment was critical to differentiate between live and dead bacteria.
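As a rough, illustrative reading of such qPCR data (not the study's protocol), the quantity of the 16S rRNA marker relative to a pre-treatment baseline can be derived from threshold-cycle (Ct) values and compared with the change in viable counts; the Ct values and the assumed amplification efficiency below are invented for the example.

```python
# Illustrative conversion of qPCR threshold cycles (Ct) to relative 16S rRNA
# marker levels versus a pre-treatment baseline (Ct values are invented).
import math

def fold_change(ct_sample: float, ct_baseline: float, efficiency: float = 2.0) -> float:
    """Relative marker quantity versus baseline, assuming the given per-cycle
    amplification efficiency (2.0 = perfect doubling)."""
    return efficiency ** (ct_baseline - ct_sample)

# A Ct rising from 24 to 28 cycles during bactericidal treatment corresponds to
# roughly a 16-fold (~1.2 log10) decrease in marker quantity; such a decrease
# would be expected to parallel the drop in viable counts for a susceptible
# strain, whereas a stable Ct would be expected for a tolerant mutant.
fc = fold_change(ct_sample=28.0, ct_baseline=24.0)
print(f"fold change = {fc:.4f} ({math.log10(1 / fc):.1f} log10 decrease)")
```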
Abstract:
It is widely accepted that pharmacologic reduction of the blood pressure of hypertensive patients reduces the risk of at least some of the major cardiovascular complications (1-5). All major studies were carried out before orally active converting enzyme inhibitors had become available. In other words, very effective antihypertensive drugs have been around for quite some time and have already proven their efficacy. Therefore, the considerable enthusiasm that has developed in recent years for the new converting enzyme inhibitors should be evaluated in the light of previously available antihypertensive drugs, all the more so as drugs cheaper than converting enzyme inhibiting agents are presently available. Thus, the increased expense of using this new class of antihypertensive compounds should be justified by a therapeutic gain. When evaluating a class of antihypertensive drugs such as converting enzyme inhibitors, there are basically three main considerations. What is their efficacy in long-term use? This includes the effect on blood pressure, on the heart, on hemodynamics, and on blood flow distribution. What are the metabolic effects? This includes the effect on sodium and potassium excretion and on serum lipids. Are there any untoward effects related either to the chemical structure of the compound per se or rather to the approach, and in particular any central effects of the drug that can cause discomfort to the patient? The following discussion has the principal aim of reviewing these aspects of chronic use of oral converting enzyme inhibiting agents, without, however, attempting to provide an exhaustive review of the subject.
Abstract:
INTRODUCTION: The purpose of our study was to retrospectively evaluate the clinical and radiological results of subtrochanteric fractures treated with a long gamma nail (LGN). The LGN has been the implant of choice at our level-1 trauma center since 1992. MATERIALS AND METHODS: Over a period of 7 years, we treated 90 consecutive patients with subtrochanteric fractures. In order to evaluate the clinical and radiological outcomes, we reviewed the clinical and radiographic charts of these patients, followed for a mean time of 2 years (range 13-36 months). RESULTS: We found no intra- or perioperative complications, nor any early or late infections. Clinical and radiological union was achieved at a mean of 4.3 months in all of the patients (range 3-9 months); in 24 cases (30%) the distal locking bolts were removed in order to enhance callus formation and remodeling as a planned secondary surgery. Three patients (3.3%) needed unplanned secondary surgery for problems related to the nailing technique. Two mechanical failures with breakage of the nail were encountered due to proximal varus malalignment, of which one was treated with exchange nailing and grafting and the other by removal of the broken hardware, blade-plating, and bone grafting. One fracture below a short LGN was treated by exchange nailing. CONCLUSIONS: The minimally invasive technique and simple application of the LGN lead to a low percentage of complications in these difficult fractures after a relatively short learning curve. The biomechanical properties of this implant allow early mobilization and partial weight-bearing even in patients with advanced osteoporosis.
Abstract:
In treatment with ionizing radiation, the oncologic outcome is highly correlated with the total dose delivered. Increasing the dose (total or per fraction), however, may provoke late side effects that are potentially irreversible. The radiation-induced CD8 lymphocyte apoptosis value and the molecular modifications within the lymphocyte are capable of predicting the level of risk of developing late side effects after curative-intent radiotherapy. In this review, we present the different blood assays available in this setting and discuss current research possibilities, namely those involving proteomic approaches.