107 results for CONVENTIONAL ANTIPSYCHOTICS
Abstract:
BACKGROUND For patients with acute iliofemoral deep vein thrombosis, it remains unclear whether the addition of intravascular high-frequency, low-power ultrasound energy facilitates the resolution of thrombosis during catheter-directed thrombolysis. METHODS AND RESULTS In a controlled clinical trial, 48 patients (mean age 50±21 years, 52% women) with acute iliofemoral deep vein thrombosis were randomized to receive ultrasound-assisted catheter-directed thrombolysis (N=24) or conventional catheter-directed thrombolysis (N=24). The thrombolysis regimen (20 mg r-tPA over 15 hours) was identical in all patients. The primary efficacy end point was the percentage of thrombus load reduction from baseline to 15 hours according to the length-adjusted thrombus score, obtained from standardized venograms and evaluated by a core laboratory blinded to group assignment. The percentage of thrombus load reduction was 55%±27% in the ultrasound-assisted catheter-directed thrombolysis group and 54%±27% in the conventional catheter-directed thrombolysis group (P=0.91). Adjunctive angioplasty and stenting were performed in 19 (80%) patients and in 20 (83%) patients, respectively (P>0.99). Treatment-related complications occurred in 3 (12%) and 2 (8%) patients, respectively (P>0.99). At 3-month follow-up, primary venous patency was 100% in the ultrasound-assisted catheter-directed thrombolysis group and 96% in the conventional catheter-directed thrombolysis group (P=0.33), and there was no difference in the severity of the post-thrombotic syndrome (mean Villalta score: 3.0±3.9 [range 0-15] versus 1.9±1.9 [range 0-7]; P=0.21). CONCLUSIONS In this randomized controlled clinical trial of patients with acute iliofemoral deep vein thrombosis treated with a fixed-dose catheter thrombolysis regimen, the addition of intravascular ultrasound did not facilitate thrombus resolution. CLINICAL TRIAL REGISTRATION URL http://www.clinicaltrials.gov. Unique identifier: NCT01482273.
Abstract:
BACKGROUND Bolt-kit systems are increasingly used as an alternative to conventional external cerebrospinal fluid (CSF) drainage systems. Since 2009, we have regularly utilized bolt-kit external ventricular drainage (EVD) systems with silver-bearing catheters inserted manually with a hand drill and skull screws for emergency ventriculostomy. For non-emergency situations, we use conventional ventriculostomy with subcutaneously tunneled silver-bearing catheters, performed in the operating room with a pneumatic drill. This retrospective analysis compared the two techniques in terms of infection rates. METHODS 152 patients (aged 17-85 years, mean=55.4 years) were included in the final analysis; 95 received bolt-kit silver-bearing catheters and 57 received conventionally implanted silver-bearing catheters. The primary endpoint combined infection parameters: occurrence of positive CSF culture, colonization of catheter tips, or elevated CSF white blood cell counts (>4/μl). Secondary outcome parameters were presence of microorganisms in CSF or on catheter tips. Incidence of increased CSF cell counts and number of patients with catheter malposition were also compared. RESULTS The primary outcome, defined as analysis of the combined infection parameters (occurrence of either positive CSF culture, colonization of the catheter tips, or raised CSF white blood cell counts >4/μl), was not significantly different between the groups (58.9% bolt-kit group vs. 63.2% conventionally implanted group, p=0.61, chi-square test). The bolt-kit group was non-inferior and not superior to the conventional group (relative risk reduction of 6.7%; 90% confidence interval: -19.9% to 25.6%). Secondary outcomes showed no statistically significant difference in the incidence of microorganisms in CSF (2.1% bolt-kit vs. 5.3% conventionally implanted; p=0.30; chi-square test).
CONCLUSIONS This analysis indicates that silver-bearing EVD catheters implanted with a bolt-kit system outside the operating room do not significantly elevate the risk of CSF infection as compared to conventional implant methods.
Abstract:
PURPOSE Assessment of the experience gained by local referring physicians with coronary computed tomographic angiography (CCTA) in everyday clinical routine. MATERIALS AND METHODS A 25-item questionnaire was sent to 179 physicians, who together had referred a total of 1986 patients for CCTA. They were asked about their experience to date with CCTA, the indications for coronary imaging, and their practice in referring patients for noninvasive CCTA or invasive catheter angiography. RESULTS 53 questionnaires (30%) were assessable, corresponding to more than 72% of the patients referred. Of the referring physicians who responded, 94% saw a concrete advantage of CCTA in the treatment of patients, and 87% were 'satisfied' or 'very satisfied' with the reporting. For excluding coronary heart disease (CHD) where there was a low pre-test probability of disease, the physicians considered CCTA to be superior to conventional coronary diagnosis (4.2 on a scale of 1-5), and vice versa for acute coronary syndrome (1.6 of 5). The main reasons for patients' unsuitability for CCTA were claustrophobia and the absence of a sinus rhythm. The level of radiation exposure in CCTA was estimated correctly by only 42% of the referring physicians. 90% of the physicians reported that their patients evaluated their coronary CT overall as 'positive' or 'neutral', while 87% of the physicians whose patients had undergone both procedures reported that the patients had experienced CCTA as the less disagreeable of the two. CONCLUSION CCTA is accepted by the referring physicians as an alternative imaging procedure for the exclusion of CHD and received a predominantly positive assessment from both the referring physicians and the patients.
Abstract:
PURPOSE To compare postoperative morphological and rheological conditions after eversion carotid endarterectomy versus conventional carotid endarterectomy using computational fluid dynamics. BASIC METHODS Hemodynamic metrics (velocity, wall shear stress, time-averaged wall shear stress, and temporal gradient wall shear stress) in the carotid arteries were simulated in one patient after conventional carotid endarterectomy and one patient after eversion carotid endarterectomy by computational fluid dynamics analysis based on patient-specific data. PRINCIPAL FINDINGS At systolic peak, the eversion carotid endarterectomy model showed a gradually decreasing pressure along the stream path, whereas the conventional carotid endarterectomy model revealed high pressure (about 180 Pa) at the carotid bulb. Regions of low wall shear stress in the conventional carotid endarterectomy model were much larger than those in the eversion carotid endarterectomy model and had lower time-averaged wall shear stress values (conventional carotid endarterectomy: 0.03-5.46 Pa vs. eversion carotid endarterectomy: 0.12-5.22 Pa). CONCLUSIONS Computational fluid dynamics after conventional carotid endarterectomy and eversion carotid endarterectomy disclosed differences in hemodynamic patterns. Larger studies are necessary to assess whether these differences are consistent and might explain the different rates of restenosis between the two techniques.
Abstract:
BACKGROUND Staphylococcus aureus has long been recognized as a major pathogen. Methicillin-resistant strains of S. aureus (MRSA) and methicillin-resistant strains of S. epidermidis (MRSE) are among the most prevalent multiresistant pathogens worldwide, frequently causing nosocomial and community-acquired infections. METHODS In the present pilot study, we tested a polymerase chain reaction (PCR) method to quickly differentiate Staphylococci and identify the mecA gene in a clinical setting. RESULTS Compared to conventional microbiology testing, the real-time PCR assay had a higher detection rate for both S. aureus and coagulase-negative Staphylococci (CoNS; 55 vs. 32 for S. aureus and 63 vs. 24 for CoNS). Hands-on time for preparing DNA, carrying out the PCR, and evaluating results was less than 5 h. CONCLUSIONS The assay is largely automated, easy to adapt, and has been shown to be rapid and reliable. Fast detection and differentiation of S. aureus, CoNS, and the mecA gene by means of this real-time PCR protocol may help expedite therapeutic decision-making and enable earlier adequate antibiotic treatment.
Abstract:
OBJECTIVE How long clinicians should wait before considering an antipsychotic ineffective and changing treatment in schizophrenia is an unresolved clinical question. Guidelines differ substantially in this regard. The authors conducted a diagnostic test meta-analysis using mostly individual patient data to assess whether lack of improvement at week 2 predicts later nonresponse. METHOD The search included EMBASE, MEDLINE, BIOSIS, PsycINFO, Cochrane Library, CINAHL, and reference lists of relevant articles, supplemented by requests to authors of all relevant studies. The main outcome was prediction of nonresponse, defined as <50% reduction in total score on either the Positive and Negative Syndrome Scale (PANSS) or Brief Psychiatric Rating Scale (BPRS) (corresponding to at least much improved) from baseline to endpoint (4-12 weeks), by <20% PANSS or BPRS improvement (corresponding to less than minimally improved) at week 2. Secondary outcomes were absent cross-sectional symptomatic remission and <20% PANSS or BPRS reduction at endpoint. Potential moderator variables were examined by meta-regression. RESULTS In 34 studies (N=9,460) a <20% PANSS or BPRS reduction at week 2 predicted nonresponse at endpoint with a specificity of 86% and a positive predictive value (PPV) of 90%. Using data for observed cases (specificity=86%, PPV=85%) or lack of remission (specificity=77%, PPV=88%) yielded similar results. Conversely, using the definition of <20% reduction at endpoint yielded worse results (specificity=70%, PPV=55%). The test specificity was significantly moderated by a trial duration of <6 weeks, higher baseline illness severity, and shorter illness duration. CONCLUSIONS Patients not even minimally improved by week 2 of antipsychotic treatment are unlikely to respond later and may benefit from a treatment change.
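The specificity and positive predictive value reported above come from standard 2×2 diagnostic-test arithmetic, treating week-2 non-improvement (<20% PANSS/BPRS reduction) as the "test" and endpoint nonresponse as the "condition". A minimal sketch of that arithmetic, using made-up counts rather than the pooled patient data:

```python
# Illustrative 2x2 table for an early-improvement test predicting later nonresponse.
# The counts below are hypothetical, NOT taken from the meta-analysis.
#   test positive      = <20% PANSS/BPRS improvement at week 2
#   condition positive = nonresponse at endpoint (<50% reduction)
tp, fp = 450, 50    # week-2 non-improvers who were / were not nonresponders at endpoint
fn, tn = 300, 700   # week-2 improvers who were / were not nonresponders at endpoint

sensitivity = tp / (tp + fn)   # nonresponders correctly flagged early
specificity = tn / (tn + fp)   # eventual responders correctly not flagged
ppv = tp / (tp + fp)           # probability a flagged patient is truly a nonresponder

print(f"specificity = {specificity:.2f}, PPV = {ppv:.2f}")
```

With these hypothetical counts the test is highly specific (0.93) with a high PPV (0.90), the same pattern the meta-analysis reports: a positive early-warning result is rarely wrong, even though some later nonresponders are missed.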
Advantages and controversies of depot antipsychotics in the treatment of patients with schizophrenia
Abstract:
BACKGROUND The objective of this article is to give an overview of the advantages and disadvantages of the use of depot antipsychotics in the treatment of schizophrenia. The focus is on efficacy, tolerability, relapse prevention, patient compliance, and satisfaction compared to oral administration forms. MATERIAL AND METHODS A literature search was conducted in medical databases. The results of meta-analyses, randomized controlled trials, and systematic reviews from the years 1999-2014 were included. RESULTS AND DISCUSSION Depot antipsychotics ensure maintenance of constant blood levels and continuous medication delivery. The efficacy and tolerability of depot antipsychotics are comparable to oral administration forms. Due to improved medication compliance, a reduction in relapse and hospitalization rates can be achieved. This is a key focus for improving outcomes and reducing costs in the treatment of schizophrenia.
Abstract:
Importance In treatment-resistant schizophrenia, clozapine is considered the standard treatment. However, clozapine use has restrictions owing to its many adverse effects. Moreover, an increasing number of randomized clinical trials (RCTs) of other antipsychotics have been published. Objective To integrate all the randomized evidence from the available antipsychotics used for treatment-resistant schizophrenia by performing a network meta-analysis. Data Sources MEDLINE, EMBASE, Biosis, PsycINFO, PubMed, Cochrane Central Register of Controlled Trials, World Health Organization International Trial Registry, and clinicaltrials.gov were searched up to June 30, 2014. Study Selection At least 2 independent reviewers selected published and unpublished single- and double-blind RCTs in treatment-resistant schizophrenia (any study-defined criterion) that compared any antipsychotic (at any dose and in any form of administration) with another antipsychotic or placebo. Data Extraction and Synthesis At least 2 independent reviewers extracted all data into standard forms and assessed the quality of all included trials with the Cochrane Collaboration's risk-of-bias tool. Data were pooled using a random-effects model in a Bayesian setting. Main Outcomes and Measures The primary outcome was efficacy as measured by overall change in symptoms of schizophrenia. Secondary outcomes included change in positive and negative symptoms of schizophrenia, categorical response to treatment, dropouts for any reason and for inefficacy of treatment, and important adverse events. Results Forty blinded RCTs with 5172 unique participants (71.5% men; mean [SD] age, 38.8 [3.7] years) were included in the analysis. Few significant differences were found in all outcomes. In the primary outcome (reported as standardized mean difference; 95% credible interval), olanzapine was more effective than quetiapine (-0.29; -0.56 to -0.02), haloperidol (-0.29; -0.44 to -0.13), and sertindole (-0.46; -0.80 to -0.06); clozapine was more effective than haloperidol (-0.22; -0.38 to -0.07) and sertindole (-0.40; -0.74 to -0.04); and risperidone was more effective than sertindole (-0.32; -0.63 to -0.01). A pattern of superiority for olanzapine, clozapine, and risperidone was seen in other efficacy outcomes, but results were not consistent and effect sizes were usually small. In addition, relatively few RCTs were available for antipsychotics other than clozapine, haloperidol, olanzapine, and risperidone. The most surprising finding was that clozapine was not significantly better than most other drugs. Conclusions and Relevance Insufficient evidence exists on which antipsychotic is more efficacious for patients with treatment-resistant schizophrenia, and blinded RCTs, in contrast to unblinded randomized effectiveness studies, provide little evidence of the superiority of clozapine compared with other second-generation antipsychotics. Future clozapine studies with high doses and patients with extremely treatment-refractory schizophrenia might be most promising to change the current evidence.
Abstract:
BACKGROUND Endodontic treatment involves removal of the dental pulp and its replacement by a root canal filling. Restoration of root-filled teeth can be challenging due to structural differences between vital and non-vital root-filled teeth. Direct restoration involves placement of a restorative material, e.g. amalgam or composite, directly into the tooth. Indirect restorations consist of cast metal or ceramic (porcelain) crowns. The choice of restoration depends on the amount of remaining tooth, and may influence durability and cost. The decision to use a post and core in addition to the crown is clinician driven. The comparative clinical performance of crowns or conventional fillings used to restore root-filled teeth is unknown. This review updates the original, which was published in 2012. OBJECTIVES To assess the effects of restoration of endodontically treated teeth (with or without post and core) by crowns versus conventional filling materials. SEARCH METHODS We searched the following databases: the Cochrane Oral Health Group's Trials Register, CENTRAL, MEDLINE via OVID, EMBASE via OVID, CINAHL via EBSCO, and LILACS via BIREME. We also searched the reference lists of articles and ongoing trials registries. There were no restrictions regarding language or date of publication. The search is up-to-date as of 26 March 2015. SELECTION CRITERIA Randomised controlled trials (RCTs) or quasi-randomised controlled trials in participants with permanent teeth that have undergone endodontic treatment. Single full coverage crowns compared with any type of filling material for direct restoration or indirect partial restorations (e.g. inlays and onlays). Comparisons considered the type of post and core used (cast or prefabricated post), if any. DATA COLLECTION AND ANALYSIS Two review authors independently extracted data from the included trial and assessed its risk of bias.
We carried out data analysis using the 'treatment as allocated' patient population, expressing estimates of intervention effect for dichotomous data as risk ratios, with 95% confidence intervals (CI). MAIN RESULTS We included one trial, which was judged to be at high risk of performance, detection and attrition bias. The 117 participants, each with a root-filled premolar tooth restored with a carbon fibre post, were randomised to either a full coverage metal-ceramic crown or a direct adhesive composite restoration. None experienced a catastrophic failure (i.e. when the restoration cannot be repaired), although only 104 teeth were included in the final, three-year assessment. There was no clear difference between the crown group and the composite-only group for non-catastrophic failures of the restoration (1/54 versus 3/53; RR 0.33; 95% CI 0.04 to 3.05) or failures of the post (2/54 versus 1/53; RR 1.96; 95% CI 0.18 to 21.01) at three years. The quality of the evidence for these outcomes is very low. There was no evidence available for any of our secondary outcomes: patient satisfaction and quality of life, incidence or recurrence of caries, periodontal health status, and costs. AUTHORS' CONCLUSIONS There is insufficient evidence to assess the effects of crowns compared to conventional fillings for the restoration of root-filled teeth. Until more evidence becomes available, clinicians should continue to base decisions about how to restore root-filled teeth on their own clinical experience, whilst taking into consideration the individual circumstances and preferences of their patients.
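The risk ratios above follow directly from the reported event counts. A quick sketch recomputing the first one with a log-scale Wald 95% confidence interval (the usual textbook method; that the review used exactly this computation is an assumption):

```python
import math

# Non-catastrophic restoration failures from the abstract:
# crown group 1/54 vs composite group 3/53.
a, n1 = 1, 54   # events / total, crown group
b, n2 = 3, 53   # events / total, composite group

rr = (a / n1) / (b / n2)                          # risk ratio
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # SE of log(RR), Wald approximation
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)    # lower 95% limit
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)    # upper 95% limit

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # matches the reported 0.33 (0.04 to 3.05)
```

The interval spanning 1 by two orders of magnitude is why the review describes "no clear difference": with so few events, the data are consistent with anything from a large benefit to a large harm.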
Abstract:
OBJECTIVE To compare the accuracy of radiography and computed tomography (CT) in predicting implant position in relation to the vertebral canal in the cervical and thoracolumbar vertebral column. STUDY DESIGN In vitro imaging and anatomic study. ANIMALS Medium-sized canine cadaver vertebral columns (n=12). METHODS Steinmann pins were inserted into cervical and thoracolumbar vertebrae based on established landmarks but without predetermination of vertebral canal violation. Radiographs and CT images were obtained and evaluated by 6 individuals. A random subset of pins was evaluated for the ability to distinguish left from right pins on radiographs. The ability to correctly identify vertebral canal penetration for all pins was assessed on both radiographs and CT. Spines were then anatomically prepared, and visual examination of pin penetration into the canal served as the gold standard. RESULTS Left/right accuracy was 93.1%. Overall sensitivities of radiographs and CT for detecting vertebral canal penetration by an implant were significantly different, estimated at 50.7% and 93.4%, respectively (P<.0001). Sensitivity was significantly higher for complete versus partial penetration and for radiologists compared with nonradiologists for both imaging modalities. Overall specificities of radiographs and CT for detecting vertebral canal penetration were 82.9% and 86.4%, respectively (P=.049). CONCLUSIONS CT was superior to radiographic assessment and is the recommended imaging modality to assess penetration into the vertebral canal. CLINICAL RELEVANCE CT is significantly more accurate in identifying vertebral canal violation by Steinmann pins and should be performed postoperatively to assess implant position.
Abstract:
PURPOSE To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. MATERIALS AND METHODS This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (for those in the test group, using the digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (for those in the control group, using the conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time efficiency was defined as the primary outcome and was measured in minutes for every clinical and laboratory work step. Statistical analysis was performed with the Wilcoxon rank sum test. RESULTS All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, was significantly different. The mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow process and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Therefore, digital processing for overall treatment was 16% faster. Detailed analysis of the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, with a significant decrease to 158.1 ± 17.2 minutes for the test group vs 189.8 ± 25.3 minutes for the control group (P = .0001). CONCLUSION Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine.
This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.
Abstract:
OBJECTIVES The aim of this prospective cohort trial was to perform a cost/time analysis for implant-supported single-unit reconstructions in the digital workflow compared to the conventional pathway. MATERIALS AND METHODS A total of 20 patients were included for rehabilitation with 2 × 20 implant crowns in a crossover study design and treated consecutively each with customized titanium abutments plus CAD/CAM zirconia suprastructures (test: digital) and with standardized titanium abutments plus PFM crowns (control: conventional). Starting with the prosthetic treatment, clinical and laboratory work steps were analyzed, including measurement of costs in Swiss Francs (CHF), productivity rates, and cost minimization for first-line therapy. Statistical calculations were performed with the Wilcoxon signed-rank test. RESULTS Both protocols worked successfully for all test and control reconstructions. Direct treatment costs were significantly lower for the digital workflow (1815.35 CHF) compared to the conventional pathway (2119.65 CHF) [P = 0.0004]. For subprocess evaluation, total laboratory costs were calculated as 941.95 CHF for the test group and 1245.65 CHF for the control group, respectively [P = 0.003]. The clinical dental productivity rate amounted to 29.64 CHF/min (digital) and 24.37 CHF/min (conventional) [P = 0.002]. Overall, cost minimization analysis exhibited an 18% cost reduction within the digital process. CONCLUSION The digital workflow was more efficient than the established conventional pathway for implant-supported crowns in this investigation.
Abstract:
BACKGROUND The study was designed to compare the effect of in vitro FSH stimulation on the hormone production and gene expression profile of granulosa cells (GCs) isolated from single naturally matured follicles obtained in natural cycle in vitro fertilization (NC-IVF) with granulosa cells obtained from conventional gonadotropin-stimulated IVF (c-IVF). METHODS Lutein granulosa cells from the dominant follicle were isolated and cultured in the absence or presence of recombinant FSH. The cultures were run for 48 h and for six days. Messenger RNA (mRNA) expression of anti-Müllerian hormone (AMH) and the FSH receptor was measured by quantitative polymerase chain reaction (qPCR). AMH protein and progesterone (P4) concentrations in the culture supernatant were measured by ELISA and RIA. RESULTS Our results showed that the mRNA expression of AMH was significantly higher in GCs from NC-IVF than from c-IVF on day 6 after treatment with FSH (1 IU/mL). FSH stimulation increased the concentration of AMH in the culture supernatant of GCs from NC-IVF compared with cells from c-IVF. In the culture medium, the AMH level correlated significantly and positively with the progesterone concentration. CONCLUSIONS Differences in the levels of AMH and progesterone released into the medium by cultured GCs, as well as in AMH gene expression, were observed between GCs obtained under natural and stimulated IVF protocols. The results suggest that artificial gonadotropin stimulation may have an effect on intra-follicular metabolism. The significant positive correlation between AMH and progesterone may suggest progesterone as a factor influencing AMH secretion.
Abstract:
Accurate rainfall data are the key input parameter for modelling river discharge and soil loss. Remote areas of Ethiopia often lack adequate precipitation data and where these data are available, there might be substantial temporal or spatial gaps. To counter this challenge, the Climate Forecast System Reanalysis (CFSR) of the National Centers for Environmental Prediction (NCEP) readily provides weather data for any geographic location on earth between 1979 and 2014. This study assesses the applicability of CFSR weather data to three watersheds in the Blue Nile Basin in Ethiopia. To this end, the Soil and Water Assessment Tool (SWAT) was set up to simulate discharge and soil loss, using CFSR and conventional weather data, in three small-scale watersheds ranging from 112 to 477 ha. Calibrated simulation results were compared to observed river discharge and observed soil loss over a period of 32 years. The conventional weather data resulted in very good discharge outputs for all three watersheds, while the CFSR weather data resulted in unsatisfactory discharge outputs for all of the three gauging stations. Soil loss simulation with conventional weather inputs yielded satisfactory outputs for two of three watersheds, while the CFSR weather input resulted in three unsatisfactory results. Overall, the simulations with the conventional data resulted in far better results for discharge and soil loss than simulations with CFSR data. The simulations with CFSR data were unable to adequately represent the specific regional climate for the three watersheds, performing even worse in climatic areas with two rainy seasons. Hence, CFSR data should not be used lightly in remote areas with no conventional weather data where no prior analysis is possible.
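Ratings such as "very good" and "unsatisfactory" for SWAT discharge simulations are conventionally based on goodness-of-fit statistics like the Nash-Sutcliffe efficiency (NSE); that this study used NSE specifically is an assumption. A minimal sketch of the statistic, with made-up discharge values:

```python
# Nash-Sutcliffe efficiency: 1 means a perfect fit, 0 means the model is no
# better than predicting the observed mean, negative values mean it is worse.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

obs = [1.0, 2.5, 4.0, 3.0, 2.0]   # hypothetical observed discharge (m^3/s)
good = [1.1, 2.4, 3.8, 3.1, 2.1]  # close simulation -> NSE near 1
poor = [2.0, 2.0, 2.0, 2.0, 2.0]  # flat simulation missing the dynamics -> NSE < 0

print(round(nse(obs, good), 2), round(nse(obs, poor), 2))
```

A flat simulation that ignores the seasonal rainfall signal scores at or below zero, which illustrates why weather inputs that misrepresent the regional climate, as CFSR did here, drag model performance into the unsatisfactory range.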