36 results for Non-clinical activities

at Université de Lausanne, Switzerland


Relevance: 90.00%

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computation assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled varies widely across tools (from two to 180), and eight programs let users add new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine can also propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user-friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be weighed against the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Computer-assisted TDM is attracting growing interest and should further improve, especially in terms of information-system interfacing, user-friendliness, data storage capability and report generation.
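The a posteriori Bayesian adjustment that these tools perform can be illustrated with a minimal sketch: maximum a posteriori (MAP) estimation of an individual clearance for a one-compartment model at steady state, given a population prior and one measured level. All numbers, the model, and the variable names are illustrative assumptions, not taken from any of the reviewed programs.

```python
# Minimal MAP-Bayesian dose-individualization sketch (one-compartment
# model, average steady-state concentration); priors and the measured
# level are hypothetical, not from any reviewed software.
import numpy as np
from scipy.optimize import minimize_scalar

pop_cl, omega_cl = 5.0, 0.3        # population clearance (L/h), log-SD prior
sigma = 0.15                        # residual error (log-scale SD)
dose, tau = 1000.0, 12.0            # dose (mg), dosing interval (h)
c_obs = 22.0                        # measured steady-state level (mg/L)

def css(cl):                        # average steady-state concentration
    return dose / (cl * tau)

def neg_log_post(log_cl):           # -log posterior, up to a constant
    cl = np.exp(log_cl)
    prior = ((log_cl - np.log(pop_cl)) / omega_cl) ** 2
    lik = ((np.log(c_obs) - np.log(css(cl))) / sigma) ** 2
    return 0.5 * (prior + lik)

res = minimize_scalar(neg_log_post, bounds=(-3, 5), method="bounded")
cl_map = np.exp(res.x)              # MAP clearance estimate
target = 15.0                       # target average concentration (mg/L)
new_dose = target * cl_map * tau    # dose reaching the target with MAP CL
print(f"MAP clearance: {cl_map:.2f} L/h, suggested dose: {new_dose:.0f} mg")
```

Real TDM programs extend this with multi-sample fitting, covariate models and, in the non-parametric case (as in MM-USC*PACK©), discrete support points instead of a log-normal prior, but the posterior-balancing logic is the same.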

Relevance: 40.00%

Abstract:

OBJECTIVE: To describe the food habits and dietary intakes of athletic and non-athletic adolescents in Switzerland. SETTING: Colleges, high schools and professional centers in the Swiss canton of Vaud. METHOD: A total of 3,540 subjects aged 9-19 y answered a self-reported anonymous questionnaire assessing lifestyle, physical and sports activity, and food habits. Within this sample, a subgroup of 246 subjects aged 11-15 y also participated in an in-depth ancillary study including a 3-day dietary record complemented by an interview with a dietician. RESULTS: More boys than girls reported engaging in regular sports activities (P<0.001). Adolescent food habits are quite traditional: up to 15 y, most respondents eat breakfast and at least two hot meals a day, with these percentages decreasing thereafter. Snacking is widespread among adolescents (60-80% in the morning, 80-90% in the afternoon). Food habits among athletic adolescents are healthier and are also perceived as such by a higher proportion of them. Among athletic adolescents, consumption frequency is higher for dairy products, ready-to-eat (RTE) cereals, fruit, fruit juices and salad (P<0.05 at least). The athletic adolescents' diet therefore provides more micronutrients than that of their non-athletic counterparts. Within the subgroup (ancillary study), mean energy intake corresponded to the requirements for each age/gender group. CONCLUSIONS: Athletic adolescents display healthier food habits than non-athletic adolescents. This result supports the idea that healthy behaviors tend to cluster and suggests that prevention programs for this age group should target sports activity and food habits simultaneously.

Relevance: 30.00%

Abstract:

Pain is frequent in the intensive care unit (ICU) and its management is a major issue for nurses. The assessment of pain is a prerequisite for appropriate pain management. However, pain assessment is difficult when patients are unable to communicate about their experience, and nurses must then base their evaluation on external signs. Clinical practice guidelines highlight the need to use behavioral scales that have been validated for nonverbal patients. Current behavioral pain tools for ICU patients unable to communicate may not be appropriate for nonverbal brain-injured ICU patients, as these patients demonstrate specific responses to pain. This study aimed to identify, describe and validate pain indicators and descriptors in brain-injured ICU patients. A mixed multiphase method design with a quantitative dominant was chosen for this study. The first phase aimed to identify indicators and descriptors of pain for nonverbal brain-injured ICU patients using data from three sources: an integrative literature review, a consultation using the nominal group technique with 18 experienced clinicians (12 nurses and 6 physicians), and the results of an observational pilot study with 10 traumatic brain-injured patients. This first phase identified 6 indicators and 47 behavioral, vocal and physiological descriptors of pain that could be included in a pain assessment tool for this population. The sequential second phase tested the psychometric properties of the previously identified indicators and descriptors. Content validity was tested with 10 clinical and 4 scientific experts, who rated the pertinence and comprehensibility of each descriptor using a structured questionnaire. This process selected 33 of the 47 descriptors and validated the 6 indicators. The psychometric properties of these descriptors and indicators were then tested at rest, during non-nociceptive stimulation and during nociceptive stimulation (turning) in a sample of 116 brain-injured ICU patients hospitalized in two university centers. Results showed important variations in the descriptors observed during nociceptive stimulation, probably due to the heterogeneity of patients' levels of consciousness. Ten descriptors were excluded because they were observed less than 5% of the time during nociceptive stimulation or because their reliability was insufficient. All physiological descriptors were deleted because they showed little variability and their inter-observer reliability was lacking. Concomitant validity, testing the association between patients' self-reports of pain and measures performed with the descriptors, was acceptable during nociceptive stimulation (rs = 0.527, p = 0.003, n = 30). However, convergent validity (testing the association between the nurses' pain assessment and measures made with the descriptors) and divergent validity (testing the ability of the indicators to discriminate between rest and nociceptive stimulation) varied according to the level of consciousness. These results highlight the need to study pain descriptors in brain-injured patients as a function of their level of consciousness, and to take the heterogeneity of this population into account when designing a pain assessment tool for nonverbal brain-injured ICU patients.
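The concomitant-validity figure reported above (rs = 0.527, p = 0.003, n = 30) is a Spearman rank correlation between self-reported pain and descriptor-based scores. A minimal sketch of that computation, with hypothetical paired scores rather than the study data, is:

```python
# Spearman rank correlation between patient self-reports and
# descriptor-based pain scores (all values are hypothetical).
from scipy.stats import spearmanr

self_report = [2, 5, 3, 7, 1, 6, 4, 8, 2, 5]   # 0-10 self-rated pain
descriptor  = [1, 4, 4, 6, 2, 5, 3, 7, 1, 6]   # behavioural score
rs, p = spearmanr(self_report, descriptor)
print(f"rs = {rs:.3f}, p = {p:.4f}")
```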

Relevance: 30.00%

Abstract:

BACKGROUND: First hospitalisation for a psychotic episode causes intense distress to patients and families, but offers an opportunity to make a diagnosis and start treatment. However, linkage to outpatient psychiatric care remains a notoriously difficult step for young psychotic patients, who frequently interrupt treatment after hospitalisation. Persistent symptoms and untreated psychosis may therefore remain a problem despite hospitalisation and proper diagnosis. With persisting psychotic symptoms, numerous complications may arise: breakdown in relationships, loss of family and social support, loss of employment or interruption of studies, denial of disease, depression, suicide, substance abuse and violence. Understanding the mechanisms that might promote linkage to outpatient psychiatric care is therefore a critical issue, especially for early intervention in psychotic disorders. OBJECTIVE: To study which factors hinder or promote linkage of young psychotic patients to outpatient psychiatric care after a first hospitalisation, in the absence of a vertically integrated program for early psychosis. METHODS: File audit study of all patients aged 18 to 30 who were admitted for the first time to the psychiatric University Hospital of Lausanne in the year 2000. For statistical analysis, chi2 tests were used for categorical variables and t-tests for dimensional variables; p<0.05 was considered statistically significant. RESULTS: 230 patients aged 18 to 30 were admitted to the Lausanne University psychiatric hospital for the first time during the year 2000, 52 of them (23%) with a diagnosis of psychosis. Patients with psychosis were more often male (83%) than non-psychosis patients (49%). Furthermore, they had (1) a mean duration of stay 10 days longer (24 vs 14 days), (2) a higher rate of compulsory admissions (53% vs 22%) and (3) were more often hospitalised by a psychiatrist rather than a general practitioner (83% vs 53%). Other socio-demographic and clinical features at admission were similar in the two groups. Among the 52 psychotic patients, 10 did not stay in the catchment area for subsequent treatment. Of the 42 psychotic patients who remained in the catchment area after discharge, 20 (48%) did not attend the scheduled or rescheduled outpatient appointment. None of the socio-demographic characteristics was associated with attendance at outpatient appointments. On the other hand, voluntary admission and suicidal ideation before admission were significantly related to attending the initial appointment. Moreover, some elements of treatment seemed to be associated with a higher likelihood of attending outpatient treatment: (1) provision of information to the patient regarding diagnosis, (2) discussion of the treatment plan between in- and outpatient staff, (3) involvement of the outpatient team during hospitalisation, and (4) elaboration of concrete strategies to meet basic needs, organise daily activities or education, and reach for help in case of need. CONCLUSION: As in other studies, half of the patients admitted for a first psychotic episode failed to link to outpatient psychiatric care. Our study suggests that treatment rather than patient characteristics plays a critical role in this phenomenon.
Development of a partnership with patients and their involvement in the decision process, provision of good information regarding the illness, clear definition of the treatment plan, development of concrete strategies to cope with the illness and its potential complications, and involvement of the outpatient treating team already during hospitalisation all emerged as critical strategies to facilitate adherence to outpatient care. While the current rate of disengagement after admission is highly concerning, our findings are encouraging, since the strategies they point to can easily be implemented. An open approach to psychosis, the development of partnership with patients and better coordination between inpatient and outpatient teams should therefore be among the targets of early intervention programs. These observations might help in setting priorities when conceptualising new programs, and in implementing services that promote engagement of patients in treatment during the critical initial phase of psychotic disorders.
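The statistical approach stated in the methods above (chi2 tests for categorical variables, t-tests for dimensional ones, p < 0.05) corresponds to the following minimal sketch; the 2x2 counts and the length-of-stay samples are hypothetical illustrations, not the study data.

```python
# Chi-square test for a categorical variable (e.g., attendance by
# admission mode) and t-test for a dimensional one (e.g., length of
# stay); all numbers are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

attended = np.array([[15, 7],      # voluntary: attended / missed
                     [7, 13]])     # compulsory: attended / missed
chi2, p_cat, dof, _ = chi2_contingency(attended)

los_psychosis = np.random.default_rng(0).normal(24, 8, 52)
los_other = np.random.default_rng(1).normal(14, 6, 178)
t, p_dim = ttest_ind(los_psychosis, los_other)

print(f"chi2 = {chi2:.2f} (p = {p_cat:.3f}); t = {t:.2f} (p = {p_dim:.3g})")
```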

Relevance: 30.00%

Abstract:

The number of patients with a solid organ transplant, and their individual survival, have increased in Switzerland over the last decades. As a consequence of long-term immunosuppression, skin cancer in solid organ transplant recipients (SOTRs) has been recognized as an important problem. Screening of potential SOTRs, and their education about sun-damage prevention and early recognition of skin cancer, are important before transplantation. Once transplanted, SOTRs should be seen by a dermatologist yearly for repeat education as well as early diagnosis, prevention and treatment of skin cancer. Squamous cell carcinoma of the skin (SCC) is the most frequent cancer in the setting of long-term immunosuppression. Sun protection through behaviour, clothing and daily sunscreen application is the most effective prevention. Cumulative sun damage results in field cancerisation with numerous in-situ SCCs such as actinic keratosis and Bowen's disease, which should be treated proactively. Invasive SCC is cured by complete surgical excision; early removal is the best precaution against potential SCC metastases. Reduction of immunosuppression and a switch to mTOR inhibitors and, potentially, mycophenolate may reduce the incidence of further SCC. Chemoprevention with the retinoid acitretin reduces the recurrence rate of SCC. The dermatological follow-up of SOTRs should be integrated into comprehensive post-transplant care.

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE: Accurate placement of an external ventricular drain (EVD) for the treatment of hydrocephalus is of paramount importance for its functionality and for minimizing morbidity and complications. The aim of this study was to compare two different drain-insertion assistance tools with the traditional free-hand anatomical-landmark method, and to measure efficacy, safety and precision. METHODS: Ten cadaver heads were prepared by opening large bone windows centered on Kocher's points on both sides. Nineteen physicians, divided into two groups (trainees and board-certified neurosurgeons), performed EVD insertions. The target for the ventricular drain tip was the ipsilateral foramen of Monro. Each participant inserted the external ventricular catheter in three different ways: 1) free-hand by anatomical landmarks, 2) neuronavigation-assisted (NN), and 3) XperCT-guided (XCT). The number of ventricular hits and dangerous trajectories, procedure time, radiation exposure of patients and physicians, distance of the catheter tip to target, and size of deviations projected in the orthogonal planes were measured and compared. RESULTS: Insertion using XCT increased the probability of ventricular puncture from 69.2% to 90.2% (p = 0.02). Non-assisted placements were significantly less precise (catheter tip-to-target distance 14.3 ± 7.4 mm versus 9.6 ± 7.2 mm, p = 0.0003). Procedure time increased from 3.04 ± 2.06 min to 7.3 ± 3.6 min (p < 0.001). The X-ray exposure for XCT was 32.23 mSv, but could be reduced to 13.9 mSv if patients were initially imaged in the hybrid operating suite. No supplementary radiation exposure is needed for NN if patients are initially imaged according to a navigation protocol. CONCLUSION: This ex vivo study demonstrates significantly improved accuracy and safety with either the NN- or the XCT-assisted method. Efforts should therefore be undertaken to implement these new technologies into daily clinical practice. However, accuracy has to be balanced against the urgency of EVD placement, as image-guided insertion entails a longer preparation time for image acquisition and trajectory planning.
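The headline accuracy gain (ventricular puncture in 69.2% of free-hand versus 90.2% of XperCT-guided insertions, p = 0.02) is a comparison of two proportions. A minimal sketch follows; the attempt denominators are assumed for illustration, since the abstract does not state them.

```python
# Two-proportion z-test for ventricular puncture success, free-hand
# versus XperCT-guided; denominators are assumed, not from the study.
from statsmodels.stats.proportion import proportions_ztest

successes = [36, 46]   # hypothetical hits: free-hand, XCT-guided
attempts  = [52, 51]   # hypothetical numbers of insertions
z, p = proportions_ztest(successes, attempts)
print(f"z = {z:.2f}, p = {p:.3f}")
```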

Relevance: 30.00%

Abstract:

The trabecular bone score (TBS; Med-Imaps, Pessac, France) is an index of bone microarchitectural texture extracted from anteroposterior dual-energy X-ray absorptiometry (DXA) images of the spine. Previous studies have documented the ability of spine TBS to differentiate between women with and without fractures among age- and areal bone mineral density (aBMD)-matched controls, as well as to predict future fractures. In this cross-sectional analysis of data collected from 3 geographically dispersed facilities in the United States, we investigated age-related changes in the microarchitecture of lumbar vertebrae as assessed by TBS in a cohort of non-Hispanic US white women. All subjects were 30 yr of age and older and had an L1-L4 aBMD Z-score within ±2 SD of the population mean. Individuals were excluded if they had fractures, were on any osteoporosis treatment, or had any illness that would be expected to impact bone metabolism. All data were extracted from Prodigy DXA devices (GE-Lunar, Madison, WI). Cross-calibrations between the 3 participating centers were performed for TBS and aBMD. aBMD and TBS were evaluated for spine L1-L4 as well as for all other possible vertebral combinations. To validate the cohort, the aBMD normative data of our cohort were compared with the US non-Hispanic white Lunar data provided by the manufacturer. A database of 619 non-Hispanic US white women, ages 30-90 yr, was created. aBMD normative data obtained from this cohort were not statistically different from the non-Hispanic US white Lunar normative data provided by the manufacturer (p = 0.30), which indirectly validates our cohort. TBS values at L1-L4 were weakly inversely correlated with body mass index (r = -0.17) and weight (r = -0.16) and were not correlated with height. TBS values for all lumbar vertebral combinations decreased significantly with age. There was a linear decrease of 16.0% (-2.47 T-score) in TBS at L1-L4 between 45 and 90 yr of age (vs. -2.34 for aBMD). The rate of microarchitectural loss increased by 50% after age 65 (from -0.004 to -0.006 per year). Similar results were obtained for the other combinations of lumbar vertebrae. TBS, an index of bone microarchitectural texture, decreases with advancing age in non-Hispanic US white women. Little change in TBS is observed between ages 30 and 45; thereafter, a progressive decrease is observed with advancing age. The changes we observed in these American women are similar to those previously reported for a French population of white women (r² > 0.99). This reference database will facilitate the use of TBS to assess bone microarchitectural deterioration in clinical practice.
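The reported decline pattern (little change to age 45, then roughly -0.004 TBS units per year before 65 and -0.006 after) can be summarized by fitting separate linear slopes on either side of 65. The sketch below uses synthetic data built from those assumed rates, not the study data.

```python
# Two-slope summary of age-related TBS decline (synthetic data with a
# plateau to 45, then -0.004/yr before 65 and -0.006/yr after).
import numpy as np

rng = np.random.default_rng(42)
age = rng.uniform(30, 90, 619)
loss = np.where(age < 65,
                0.004 * np.clip(age - 45, 0, None),
                0.004 * 20 + 0.006 * (age - 65))
tbs = 1.35 - loss + rng.normal(0, 0.02, age.size)   # measurement noise

mid, old = (age >= 45) & (age < 65), age >= 65
slope_mid = np.polyfit(age[mid], tbs[mid], 1)[0]    # fitted slope 45-65
slope_old = np.polyfit(age[old], tbs[old], 1)[0]    # fitted slope >65
print(f"slope 45-65: {slope_mid:.4f}/yr, slope >65: {slope_old:.4f}/yr")
```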

Relevance: 30.00%

Abstract:

BACKGROUND: The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health-policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias, we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries. METHODS AND FINDINGS: Four bibliographic databases were searched for methodological research projects (MRPs) without limitations on publication year, language or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PIs) and random-effects meta-analysis. Pooled odds ratios (ORs) were used to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI between 22% and 72%, and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%-52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90%, and the weighted pooled proportion would be 54.2% (95% CI 42.0%-65.9%, I2 = 98.9%). REC-approved studies with statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2-3.5). Phase III trials were also more likely to be published than phase II trials (pooled OR 2.0; 95% CI 1.6-2.5). The probability of publication within two years after study completion ranged from 7% to 30%. CONCLUSIONS: A substantial proportion of the studies approved by RECs or included in trial registries remains unpublished. Owing to the large heterogeneity, the publication probability of a future study is very difficult to predict. Non-publication of research is not a random process; for example, it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.
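A random-effects pooling of publication proportions of the kind described above can be sketched with the DerSimonian-Laird estimator on logit-transformed proportions. The cohort counts below are hypothetical stand-ins for the MRPs, not the review's data.

```python
# DerSimonian-Laird random-effects pooling of publication proportions
# on the logit scale (cohorts below are hypothetical).
import numpy as np

published = np.array([40, 120, 55, 210, 33])
total     = np.array([90, 300, 100, 520, 60])

p = published / total
y = np.log(p / (1 - p))                      # logit proportions
v = 1 / published + 1 / (total - published)  # approximate variances

w = 1 / v                                    # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
k = len(y)
tau2 = max(0.0, (q - (k - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (v + tau2)                        # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
pooled = 1 / (1 + np.exp(-y_re))             # back-transform to a proportion
i2 = max(0.0, (q - (k - 1)) / q) * 100       # heterogeneity statistic
print(f"pooled proportion = {pooled:.1%}, I2 = {i2:.1f}%")
```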

Relevance: 30.00%

Abstract:

Purpose: Emergency-room reading performance has been a point of interest in recent studies comparing radiologists with other physician groups. Our objective was to evaluate and compare the reading performance of radiologists and surgeons on non-traumatic abdominal CTs in an emergency-room setting. Methods and materials: A total of ten readers from four groups participated in this study: three senior radiologists, three senior visceral surgeons, two junior radiologists and two junior surgeons. Each observer evaluated, in blinded fashion, a total of 150 multi-slice acute abdominal CTs. The CTs were chosen to represent the established proportions of acute-abdomen pathologies in a Level I trauma centre from 2003 to 2005. Each answer was scored as right or wrong with regard to pathology location, diagnosis and need for operation. The gold standard was the intraoperative finding or, for non-operated patients, clinical follow-up. Significance was assumed at p < .05. Results: Senior radiologists had a mean score of 2.38 ± 1.14 and junior radiologists 2.34 ± 1.14, whereas senior surgeons scored 2.07 ± 1.30 and junior surgeons 1.62 ± 1.42. No significant difference was found between the two radiologist groups, but results were significantly better for senior surgeons than for junior surgeons, and better for both radiologist groups than for each of the surgeon groups (all p < .05). Conclusion: Abdominal CT reading in an acute-abdomen setting should continue to rely on evaluation by a radiologist, whether senior or junior. Satisfactory reading results can be achieved by senior visceral surgeons, but junior surgeons need more experience to achieve good reading performance.

Relevance: 30.00%

Abstract:

Since 1990, several techniques have been developed to photochemically inactivate pathogens in platelet concentrates, potentially leading to safer transfusion therapy. The three most common methods are amotosalen/UVA (INTERCEPT Blood System), riboflavin/UVA-UVB (MIRASOL PRT), and UVC (Theraflex-UV). We review the biology of these pathogen inactivation methods, present their efficacy in reducing pathogens, discuss their impact on the functional aspects of treated platelets, and review clinical studies of their clinical efficacy and possible toxicity.

Relevance: 30.00%

Abstract:

We sought to provide a contemporary picture of the presentation, etiology, and outcome of infective endocarditis (IE) in a large patient cohort from multiple locations worldwide. We conducted a prospective cohort study of 2781 adults with definite IE who were admitted to 58 hospitals in 25 countries from June 1, 2000, through September 1, 2005. The median age of the cohort was 57.9 years (interquartile range, 43.2-71.8), and 72.1% had native valve IE. Most patients (77.0%) presented early in the disease (<30 days), with few of the classic clinical hallmarks of IE. Recent health care exposure was found in one-quarter of patients. Staphylococcus aureus was the most common pathogen (31.2%). The mitral (41.1%) and aortic (37.6%) valves were most commonly infected. Complications were common: stroke (16.9%), embolization other than stroke (22.6%), heart failure (32.3%), and intracardiac abscess (14.4%). Surgical therapy was common (48.2%), and in-hospital mortality remained high (17.7%). Prosthetic valve involvement (odds ratio, 1.47; 95% confidence interval, 1.13-1.90), increasing age (1.30; 1.17-1.46 per 10-year interval), pulmonary edema (1.79; 1.39-2.30), S aureus infection (1.54; 1.14-2.08), coagulase-negative staphylococcal infection (1.50; 1.07-2.10), mitral valve vegetation (1.34; 1.06-1.68), and paravalvular complications (2.25; 1.64-3.09) were associated with an increased risk of in-hospital death, whereas viridans streptococcal infection (0.52; 0.33-0.81) and surgery (0.61; 0.44-0.83) were associated with a decreased risk. In the early 21st century, IE is more often an acute disease, characterized by a high rate of S aureus infection. Mortality remains relatively high.
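Each risk estimate above is an odds ratio with a 95% confidence interval. The standard computation from a 2x2 table of exposure by in-hospital death is sketched below with hypothetical counts, not the cohort's data.

```python
# Odds ratio and 95% CI from a 2x2 table (exposure vs in-hospital
# death); the counts are hypothetical.
import math

a, b = 60, 240    # exposed:   died / survived
c, d = 430, 2051  # unexposed: died / survived

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```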

Relevance: 30.00%

Abstract:

OBJECTIVE: The ability to work and live independently is of particular concern for patients with Parkinson's disease (PD). We studied a series of PD patients able to work or live independently at baseline, and evaluated potential risk factors for two separate outcomes: loss of ability to work and loss of ability to live independently. METHODS: The series comprised 495 PD patients followed prospectively. Ability to work and ability to live independently were assessed by clinical interview and examination. Cox regression models adjusted for age and disease duration were used to evaluate associations of baseline characteristics with loss of ability to work and loss of ability to live independently. RESULTS: Higher UPDRS dyskinesia score, UPDRS instability score, UPDRS total score, Hoehn and Yahr stage, and the presence of intellectual impairment at baseline were all associated with an increased risk of future loss of ability to work and loss of ability to live independently (P ≤ 0.0033). Five years after the initial visit, among patients ≤70 years of age with a disease duration ≤4 years at the initial visit, 88% were still able to work and 90% to live independently. These estimates worsened as age and disease duration at the initial visit increased; for patients >70 years of age with a disease duration >4 years, the estimates at 5 years were 43% able to work and 57% able to live independently. CONCLUSIONS: The information provided in this study can help PD patients prepare for future changes in their ability to perform activities of daily living.
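Age- and duration-adjusted Cox models of the kind used above can be sketched with the lifelines package. The data frame, its column names, and all values below are hypothetical stand-ins for the study variables.

```python
# Cox proportional-hazards sketch for time to loss of ability to work,
# adjusted for age and disease duration (hypothetical data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed": [2.1, 5.0, 3.4, 6.2, 1.8, 4.5, 2.9, 5.7, 1.2, 3.8],
    "lost_work":      [1,   0,   1,   0,   1,   0,   1,   0,   1,   0],
    "age":            [55,  48,  72,  60,  75,  52,  68,  50,  77,  58],
    "duration":       [3,   1,   6,   4,   2,   2,   5,   7,   9,   3],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="lost_work")
cph.print_summary()   # hazard ratios with 95% CIs for age and duration
```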

Relevance: 30.00%

Abstract:

QUESTION UNDER STUDY: To assess which characteristics of high-risk acute coronary syndrome (ACS) patients played a role in prioritising access to the intensive care unit (ICU), and whether introducing clinical practice guidelines (CPGs) explicitly stating ICU admission criteria altered this practice. PATIENTS AND METHODS: All consecutive patients with ACS admitted to our medical emergency centre over the 3 months before and the 3 months after CPG implementation were prospectively assessed. The impact of demographic and clinical characteristics (age, gender, cardiovascular risk factors, and clinical parameters upon admission) on ICU hospitalisation of high-risk patients (defined by retrosternal pain of prolonged duration with ECG changes and/or a positive troponin blood level) was studied by logistic regression. RESULTS: Before and after CPG implementation, 328 and 364 patients, respectively, were assessed for suspected ACS. Before CPG implementation, 36 of the 81 high-risk patients (44.4%) were admitted to the ICU; after implementation, 35 of the 90 high-risk patients (38.9%) were. Male patients were more frequently admitted to the ICU before CPG implementation (OR=7.45, 95% CI 2.10-26.44), but not after (OR=0.73, 95% CI 0.20-2.66). Age played a significant role in both periods (OR=1.57, 95% CI 1.24-1.99), with both young and advanced age significantly reducing ICU admission, though to a lesser extent after CPG implementation. CONCLUSION: Prioritisation of ICU access for high-risk ACS patients was age-dependent rather than based on the cardiovascular risk-factor profile. CPG implementation explicitly stating ICU admission criteria decreased discrimination against women, but other factors are likely to play a role in bed allocation.
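The logistic model described above (ICU admission as the outcome; age, gender and risk factors as predictors) can be sketched with statsmodels' formula interface. The data frame below is a hypothetical stand-in generated to have the stated structure, not the study data.

```python
# Logistic regression of ICU admission on age and sex for high-risk
# ACS patients (hypothetical data, illustrating the model form only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 171                               # ~81 + 90 high-risk patients
df = pd.DataFrame({
    "age": rng.normal(65, 12, n).round(),
    "male": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.9 * df["male"] - 0.02 * (df["age"] - 65)
df["icu"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("icu ~ age + male", data=df).fit(disp=False)
print(np.exp(model.params))           # odds ratios
print(model.conf_int())               # 95% CIs (log-odds scale)
```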

Relevance: 30.00%

Abstract:

OBJECTIVE: To investigate the safety and efficacy of 50-Hz repetitive transcranial magnetic stimulation (rTMS) in the treatment of motor symptoms in Parkinson disease (PD). BACKGROUND: Progression of PD is characterized by the emergence of motor deficits that gradually respond less to dopaminergic therapy. rTMS has shown promising results in improving gait, a major cause of disability, and may provide a therapeutic alternative. Prior controlled studies suggest that an increase in stimulation frequency might enhance therapeutic efficacy. METHODS: In this randomized, double-blind, sham-controlled study, the authors investigated the safety and efficacy of 50-Hz rTMS of the motor cortices delivered in 8 sessions over 2 weeks. Assessment of safety and clinical efficacy over a 1-month period included timed tests of gait and bradykinesia, the Unified Parkinson's Disease Rating Scale (UPDRS), and additional clinical, neurophysiological, and neuropsychological parameters. In addition, the safety of 50-Hz rTMS was tested with electromyography-electroencephalography (EMG-EEG) monitoring during and after stimulation. RESULTS: The authors investigated 26 patients with mild to moderate PD: 13 received 50-Hz rTMS and 13 sham stimulation. The 50-Hz rTMS did not improve gait, bradykinesia, or global and motor UPDRS scores, although there was a short-lived "on"-state improvement in activities of daily living (UPDRS II). The 50-Hz rTMS lengthened the cortical silent period, but other neurophysiological and neuropsychological measures remained unchanged. EMG-EEG monitoring recorded no pathological increase of cortical excitability and no epileptic activity. There were no adverse effects. CONCLUSION: 50-Hz rTMS of the motor cortices appears safe, but it fails to improve motor performance and functional status in PD. Prolonged stimulation or other rTMS techniques might be more efficacious but need to be established in future research.

Relevance: 30.00%

Abstract:

Since the management of atrial fibrillation may be difficult in the individual patient, our purpose was to develop simple clinical recommendations to help the general internist manage this common clinical problem. We systematically reviewed the literature, evaluated the evidence, and framed graded recommendations. Atrial fibrillation affects some 1% of the population in Western countries and is linked to a significant increase in morbidity and mortality. Its management requires individualised evaluation of the risks and benefits of the therapeutic modalities, relying whenever possible on simple and validated tools. The two main decisions in clinical management are 1) whether or not to implement thromboembolic prevention therapy, and 2) whether preference should be given to a "rate control" or a "rhythm control" strategy. Thromboembolic prophylaxis should be prescribed after individualised risk assessment: for patients at risk, oral anticoagulation with warfarin decreases the rate of embolic complications by 60% and aspirin by 20%, at the expense of an increased incidence of haemorrhagic complications. "Rate control" and "rhythm control" strategies are probably equivalent, and the choice should likewise be made on an individualised basis. On the evidence of data from the literature, we propose specific tables and algorithms, with graded recommendations, to assist the physician in caring for the individual atrial fibrillation patient.
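The abstract does not name a specific risk tool, but one widely used example of the "simple and validated tools" for the anticoagulation decision is the CHADS2 stroke-risk score; a minimal sketch using its standard published point values is:

```python
# CHADS2 stroke-risk score for atrial fibrillation (standard point
# values: CHF, hypertension, age >= 75 and diabetes score 1 each,
# prior stroke/TIA scores 2). Shown as one example of a validated
# tool; the abstract itself does not name it.
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_tia: bool) -> int:
    score = chf + hypertension + (age >= 75) + diabetes
    score += 2 * prior_stroke_tia
    return int(score)

# Example: 78-year-old with hypertension and diabetes, no CHF/stroke.
s = chads2(chf=False, hypertension=True, age=78,
           diabetes=True, prior_stroke_tia=False)
print(f"CHADS2 = {s}  (a score >= 2 usually favours oral anticoagulation)")
```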