The impotence of price controls: failed attempts to constrain pharmaceutical expenditures in Greece.
Abstract:
BACKGROUND: While the prices of pharmaceuticals are relatively low in Greece, expenditure on them is growing more rapidly than almost anywhere else in the European Union. OBJECTIVE: To describe and explain the rise in drug expenditures through decomposition of the increase into the contribution of changes in prices, in volumes and a product-mix effect. METHODS: The decomposition of the growth in pharmaceutical expenditures in Greece over the period 1991-2006 was conducted using data from the largest social insurance fund (IKA) that covers more than 50% of the population. RESULTS: Real drug spending increased by 285%, despite a 58% decrease in the relative price of pharmaceuticals. The increase in expenditure is mainly attributable to a switch to more innovative, but more expensive, pharmaceuticals, indicated by a product-mix residual of 493% in the decomposition. A rising volume of drugs also plays a role, and this is due to an increase in the number of prescriptions issued per doctor visit, rather than an increase in the number of visits or the population size. CONCLUSIONS: Rising pharmaceutical expenditures are strongly determined by physicians' prescribing behaviour, which is not subject to any monitoring and for which there are no incentives to be cost conscious.
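The decomposition described above can be sketched numerically. The following is a minimal illustration (all figures and names are invented for the example, not taken from the study's IKA data): expenditure growth factors into a Laspeyres price index, an aggregate volume index, and a residual that captures the shift toward newer, more expensive products.

```python
# Minimal sketch of a price/volume/product-mix decomposition of drug
# expenditure growth (all numbers below are illustrative, not study data).

def decompose_growth(p0, q0, p1, q1):
    """Split expenditure growth E1/E0 into a Laspeyres price index, an
    aggregate volume index, and a residual product-mix effect, so that
    price * volume * mix == E1/E0."""
    e0 = sum(p * q for p, q in zip(p0, q0))
    e1 = sum(p * q for p, q in zip(p1, q1))
    price = sum(p * q for p, q in zip(p1, q0)) / e0   # new prices, old volumes
    volume = sum(q1) / sum(q0)                        # total quantity growth
    mix = (e1 / e0) / (price * volume)                # shift toward pricier drugs
    return price, volume, mix

# Toy data: the old drug gets cheaper while prescriptions shift to the
# newer, more expensive one -- total spending still rises.
p0, q0 = [1.0, 2.0], [100, 10]
p1, q1 = [0.9, 2.1], [90, 40]
price, volume, mix = decompose_growth(p0, q0, p1, q1)
```

With these toy numbers the price index falls below 1 while the product-mix residual exceeds 1, mirroring the study's finding that spending rose sharply despite falling relative prices.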
Abstract:
OBJECTIVE: To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (DM) based on the Paediatric Rheumatology International Trials Organisation juvenile DM core set of variables. METHODS: Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles as clinically improved or not improved using a stepwise approach (patient's rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the "gold standard measure," chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa agreement for candidate definitions of improvement were calculated. Definitions with kappa values >0.8 were multiplied by the face validity score to select the top definitions. RESULTS: The top definition of improvement was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 1 of the remaining worsening by more than 30%, which cannot be muscle strength. The second-highest scoring definition was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 2 of the remaining worsening by more than 25%, which cannot be muscle strength (definition P1 selected by the International Myositis Assessment and Clinical Studies group). The third is similar to the second with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. CONCLUSION: We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians, and which incorporates clinically meaningful change in core set variables in a composite end point for the evaluation of global response to therapy in juvenile DM.
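The top definition of improvement is a simple composite rule and can be sketched as code. This is a hedged illustration only (the function name, the index-based encoding of muscle strength, and the sign convention that negative percentages mean improvement are all assumptions, not part of the published definition):

```python
# Sketch of the top improvement definition: >=20% improvement from baseline
# in at least 3 of 6 core set variables, with no more than 1 of the
# remaining variables worsening by more than 30%, and that worsening
# variable may not be muscle strength.
# Convention (an assumption for this sketch): negative % change = improvement.

def improved(pct_change, muscle_strength_idx=0):
    """pct_change: % change from baseline for each of the 6 core set variables."""
    n_improved = sum(1 for c in pct_change if c <= -20)
    worsened = [i for i, c in enumerate(pct_change) if c > 30]
    return (n_improved >= 3
            and len(worsened) <= 1
            and muscle_strength_idx not in worsened)
```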
Abstract:
Screening for colorectal cancer (CRC) is associated with reduced CRC mortality, but low screening rates have been reported in several settings. The aim of the study was to assess predictors of low CRC screening in Switzerland. A retrospective cohort of a random sample of 940 patients aged 50-80 years followed for 2 years from four Swiss University primary care settings was used. Patients with illegal residency status and a history of CRC or colorectal polyps were excluded. We abstracted sociodemographic data of patients and physicians, patient health status, and indicators derived from RAND's Quality Assessment Tools from medical charts. We defined CRC screening as colonoscopy in the last 10 years, flexible sigmoidoscopy in the last 5 years, or fecal occult blood testing in the last 2 years. We used bivariate and multivariate logistic regression analyses. Of 940 patients (mean age 63.9 years, 42.7% women), 316 (33.6%) had undergone CRC screening. In multivariate analysis, birthplace in a country outside of Western Europe and North America [odds ratio (OR) 0.65, 95% confidence interval (CI) 0.45-0.97], male sex of the physician in charge (OR 0.67, 95% CI 0.50-0.91), BMI 25.0-29.9 kg/m² (OR 0.66, 95% CI 0.46-0.96) and BMI of at least 30.0 kg/m² (OR 0.61, 95% CI 0.40-0.90) were associated with lower CRC screening rates. Obesity, overweight, birthplace outside of Western Europe and North America, and male sex of the physician in charge were associated with lower CRC screening rates in Swiss University primary care settings. Physician perception of obesity and its impact on their recommendation for CRC screening might be a target for further research.
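The study's screening definition is a disjunction of three time-bounded tests and can be expressed directly. A minimal sketch, assuming an illustrative function name and a `None`-means-never encoding not specified by the study:

```python
# Sketch of the study's CRC screening definition: a patient counts as
# screened if they had a colonoscopy within 10 years, flexible
# sigmoidoscopy within 5 years, or fecal occult blood testing (FOBT)
# within 2 years. None means the test was never performed.

def is_crc_screened(years_since_colonoscopy=None,
                    years_since_sigmoidoscopy=None,
                    years_since_fobt=None):
    return any([
        years_since_colonoscopy is not None and years_since_colonoscopy <= 10,
        years_since_sigmoidoscopy is not None and years_since_sigmoidoscopy <= 5,
        years_since_fobt is not None and years_since_fobt <= 2,
    ])
```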
Abstract:
QUESTION UNDER STUDY: Emergency room (ER) interpretation of the ECG is critical to assessment of patients with acute coronary syndromes (ACS). Our aim was to assess its reliability in our institution, a tertiary teaching hospital. METHODS: Over a 6-month period all consecutive patients admitted for ACS were included in the study. ECG interpretation by emergency physicians (EPs) was recorded on a preformatted sheet and compared with the interpretation of two specialist physicians (SPs). Discrepancies between the 2 specialists were resolved by an ECG specialist. RESULTS: Over the 6-month period, 692 consecutive patients were admitted with suspected ACS. ECG interpretation was available in 641 cases (93%). Concordance between SPs was 87%. Interpretation of normality or abnormality of the ECG was concordant between EPs and SPs in 475 cases (74%, kappa = 0.51). Interpretation of ischaemic modifications was concordant in 69% of cases, and as many ST segment elevations were unrecognised as overdiagnosed (5% each). The same findings occurred for ST segment depressions and negative T waves (12% each). CONCLUSIONS: Interpretation of the ECG recorded during ACS by 2 SPs was discrepant in 13% of cases. Similarly, EP interpretation was discrepant from SP interpretation in 25% of cases, equally distributed between over- and underdiagnosing of ischaemic changes. The clinical implications and impact of medical education on ECG interpretation require further study.
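The kappa statistic reported above (kappa = 0.51 for EP versus SP agreement) corrects raw agreement for agreement expected by chance. A minimal sketch of Cohen's kappa for two raters' normal/abnormal ECG calls (the rater labels and counts below are illustrative, not the study's data):

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).

def cohens_kappa(a, b):
    """a, b: equal-length lists of labels from two raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                       # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)    # chance agreement
    return (po - pe) / (1 - pe)
```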
Abstract:
OBJECTIVE: To identify predictors of improved asthma control under conditions of everyday practice in Switzerland. RESEARCH DESIGN AND METHODS: A subgroup of 1380 patients with initially inadequately controlled asthma was defined from a cohort of 1893 asthmatic patients (mean age 45.3 ± 19.2 years) recruited by 281 office-based physicians who participated in a previously conducted asthma control survey in Switzerland. Multiple regression techniques were used to identify predictors of improved asthma control, defined as an absolute decrease of 0.5 points or more in the Asthma Control Questionnaire between the baseline (V1) and follow-up visit (V2). RESULTS: Asthma control between V1 and V2 improved in 85.7%. Add-on treatment with montelukast was reported in 82.9% of the patients. Patients with worse asthma control at V1 and patients with good self-reported adherence to therapy had significantly higher chances of improved asthma control (OR = 1.24 and 1.73, 95% CI 1.18-1.29 and 1.20-2.50, respectively). Compared to adding montelukast and continuing the same inhaled corticosteroid/fixed combination (ICS/FC) dose, the addition of montelukast to an increased ICS/FC dose yielded a 4 times higher chance of improved asthma control (OR = 3.84, 95% CI 1.58-9.29). Significantly, withholding montelukast halved the probability of achieving improved asthma control (OR = 0.51, 95% CI 0.33-0.78). The probability of improved asthma control was almost 5 times lower among patients in whom FEV1 was measured compared to those in whom it was not (OR = 0.23, 95% CI 0.09-0.55). Patients with severe persistent asthma also had a significantly lower probability of improved control (OR = 0.15, 95% CI 0.07-0.32), as did older patients (OR = 0.98, 95% CI 0.97-0.99). Subgroup analyses which excluded patients whose asthma may have been misdiagnosed and might in reality have been chronic obstructive pulmonary disease (COPD) showed comparable results.
CONCLUSIONS: Under conditions of everyday clinical practice, the addition of montelukast to ICS/FC and good adherence to therapy increased the likelihood of achieving better asthma control at the follow-up visit, while older age and more severe asthma significantly decreased it.
Abstract:
Since its introduction 16 years ago, the astrocyte-neuron lactate shuttle (ANLS) model has profoundly modified our understanding of neuroenergetics by bringing cellular and molecular resolution to the field. Praised or disputed, the concept has never ceased to attract attention, leading to critical advances and unexpected insights. Here, we summarize recent experimental evidence further supporting the main tenets of the model. Thus, evidence for distinct metabolic phenotypes between neurons (mainly oxidative) and astrocytes (mainly glycolytic) has been provided by genomics and classical metabolic approaches. Moreover, it has become clear that astrocytes act as a syncytium to distribute energy substrates such as lactate to active neurons. Glycogen, the main energy reserve located in astrocytes, is used as a lactate source to sustain glutamatergic neurotransmission and synaptic plasticity. Lactate is also emerging as a neuroprotective agent as well as a key signal to regulate blood flow. Characterization of monocarboxylate transporter regulation indicates a possible involvement in synaptic plasticity and memory. Finally, several modeling studies have captured the implications of such findings for many brain functions. The ANLS model now represents a useful, experimentally based framework to better understand the coupling between neuronal activity and energetics as it relates to neuronal plasticity, neurodegeneration, and functional brain imaging.
Abstract:
Atrial fibrillation (AF) is the most common arrhythmia and among the leading causes of stroke and heart failure in Western populations. Despite the increasing size of clinical trials assessing the efficacy and safety of AF therapies, achieved outcomes have not always matched expectations. Considering that AF is a symptom of many possible underlying diseases, clinical research for this arrhythmia should take their respective pathophysiology into account. Accordingly, the definition of the study populations to be included should rely on the established as well as the new classifications of AF, and take advantage of a differentiated look at the AF electrocardiogram and of an increasingly large spectrum of biomarkers. Such an integrated approach could bring researchers and treating physicians one step closer to the ultimate vision of personalized therapy, which, in this case, means an AF therapy based on refined diagnostic elements in accordance with scientific evidence gathered from clinical trials. By applying clear-cut patient inclusion criteria, future studies will be of smaller size and thus of lower cost. In addition, the findings from such studies will be of greater predictive value at the individual patient level, allowing for pinpointed therapeutic decisions in daily practice.
Abstract:
BACKGROUND: Poorly controlled cardiovascular risk factors are common. Evaluating whether physicians respond appropriately to poor risk factor control in patients may better reflect quality of care than measuring proportions of patients whose conditions are controlled. OBJECTIVES: To evaluate therapy modifications in response to poor control of hypertension, dyslipidemia, or diabetes in a large clinical population. DESIGN: Retrospective cohort study within an 18-month period in 2002 to 2003. SETTING: Kaiser Permanente of Northern California. PATIENTS: 253,238 adult members with poor control of 1 or more of these conditions. MEASUREMENTS: The authors assessed the proportion of patients with poor control who experienced a change in pharmacotherapy within 6 months, and they defined "appropriate care" as a therapy modification or return to control without therapy modification within 6 months. RESULTS: A total of 64% of patients experienced modifications in therapy for poorly controlled systolic blood pressure, 71% for poorly controlled diastolic blood pressure, 56% for poorly controlled low-density lipoprotein cholesterol level, and 66% for poorly controlled hemoglobin A1c level. Most frequent modifications were increases in number of drug classes (from 70% to 84%) and increased dosage (from 15% to 40%). An additional 7% to 11% of those with poorly controlled blood pressure, but only 3% to 4% of those with elevated low-density lipoprotein cholesterol level or hemoglobin A1c level, returned to control without therapy modification. Patients with more than 1 of the 3 conditions, higher baseline values, and target organ damage were more likely to receive "appropriate care." LIMITATIONS: Patient preferences and suboptimal adherence to therapy were not measured and may explain some failures to act. CONCLUSIONS: As an additional measure of the quality of care, measuring therapy modifications in response to poor control in a large population is feasible. 
Many patients with poorly controlled hypertension, dyslipidemia, or diabetes had their therapy modified and, thus, seemed to receive clinically "appropriate care" with this new quality measure.
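The "appropriate care" measure defined in the abstract above reduces to a simple indicator and can be sketched as follows. This is an illustrative encoding only (the function name and the `None`-means-event-did-not-occur convention are assumptions, not the study's specification):

```python
# Sketch of the study's "appropriate care" indicator: a patient with a
# poorly controlled condition receives appropriate care if pharmacotherapy
# is modified within 6 months, or if the condition returns to control
# without modification within 6 months. None = event did not occur.

def appropriate_care(months_to_modification=None, months_to_control=None):
    modified = months_to_modification is not None and months_to_modification <= 6
    controlled = months_to_control is not None and months_to_control <= 6
    return modified or controlled
```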
Abstract:
AIM: To document the feasibility and report the results of dosing darbepoetin-alpha at extended intervals up to once monthly (QM) in a large dialysis patient population. MATERIAL: 175 adult patients treated, at 23 Swiss hemodialysis centres, with stable doses of any erythropoiesis-stimulating agent who were switched by their physicians to darbepoetin-alpha treatment at prolonged dosing intervals (every 2 weeks [Q2W] or QM). METHOD: Multicentre, prospective, observational study. Patients' hemoglobin (Hb) levels and other data were recorded 1 month before conversion (baseline) to an extended darbepoetin-alpha dosing interval, at the time of conversion, and once monthly thereafter up to the evaluation point (maximum of 12 months or until loss to follow-up). RESULTS: Data for 161 evaluable patients from 23 sites were included in the final analysis. At 1 month prior to conversion, 73% of these patients were receiving darbepoetin-alpha weekly (QW) and 27% of the patients biweekly (Q2W). After a mean follow-up of 9.5 months, 34% received a monthly (QM) dosing regimen, 52% of the patients were receiving darbepoetin-alpha Q2W, and 14% QW. The mean (SD) Hb concentration at baseline was 12.3 ± 1.2 g/dl, compared to 11.9 ± 1.2 g/dl at the evaluation point. The corresponding mean weekly darbepoetin-alpha dose was 44.3 ± 33.4 µg at baseline and 37.7 ± 30.8 µg at the evaluation point. CONCLUSIONS: Conversion to extended darbepoetin-alpha dosing intervals of up to QM, with maintenance of initial Hb concentrations, was successful for the majority of stable dialysis patients.
Abstract:
Acute kidney injury is common in critical illness and is associated with important morbidity and mortality. Continuous renal replacement therapy (CRRT) enables physicians to safely and efficiently control the associated metabolic and fluid balance disorders. The insertion of a large central venous catheter is required, which can be associated with mechanical and infectious complications. CRRT requires anticoagulation, which currently relies on heparin in most cases, although citrate could become a standard in the near future. The choice of the substitution fluid depends on the clinical situation. A dose of 25 ml/kg/h is currently recommended.
Abstract:
This study seeks to survey patterns of practice among the different physicians involved in bone metastases management, with special focus on external beam radiotherapy (EBRT). A questionnaire about bone metastases based on clinical cases and supplemented with general questions, including medical therapies, EBRT and metabolic radiotherapy strategies, surgery, and supportive care approaches, was sent to 4,706 French-speaking physicians in Belgium, France, Luxemburg, and Switzerland. Overall, 644 questionnaires were analyzed. Twenty-eight percent concerned the radiotherapy approach and were judged adequate for the part dedicated to EBRT. Sixty-nine percent of physicians used a total irradiation dose of 30 Gy delivered in ten fractions. A large majority (75%) used two opposed fields, prescribed either at mid-depth (30%) or with non-equally weighted fields (45%). Seventy percent also irradiated above and below the concerned vertebra. Dosimetric treatment planning was performed in 85% of cases, and high-energy megavoltage photons were used in 42%. Moreover, 54% of physicians routinely used short-course radiotherapy. Radiotherapy remains the mainstay of treatment of bone metastases, but there is substantial heterogeneity in clinical practice. Guidelines and treatment protocols are required to improve treatment quality.
Abstract:
Background Working in a teaching hospital is a highly stressful occupation, which can lead to burnout. The consequences of burnout in health professionals can be very serious, both for themselves and for patients. The aim of this cross-sectional study was to assess the extent of burnout and associated factors in hospital employees. Methods In the fall of 2007, all employees of a Swiss teaching hospital were invited to complete a job satisfaction survey. It included the work-related burnout scale (scored 0-100) of the Copenhagen Burnout Inventory (CBI-French version), measuring the degree of physical and psychological fatigue and exhaustion perceived as related to the person's work; a high degree of burnout was defined as a score ≥50. Logistic regression analyses were used to determine factors associated with a high degree of burnout. Results A total of 4575 individuals returned the questionnaire (response rate 54%). Of them, 1503 (33%) had a high degree of burnout. The rate of burnout was higher among women (34.3% versus 30.5%, P = 0.012) and respondents younger than 40 years (37.7% versus 28.6%, P < 0.001). Executives were less prone to burnout than employees (27.1% versus 33.9%, P < 0.0019). Rates of burnout differed by profession: nurses and physicians had higher rates than administrative and logistic staff (42.8% and 37.4% versus 25.6% and 20.9%, respectively; P < 0.001). Burnout was inversely associated with job satisfaction. In multivariate analysis, factors associated with burnout were overall dissatisfaction (OR 3.23; 95% CI 2.66-3.91), dissatisfaction with workload (OR 2.09; 95% CI 1.74-2.51) and work-life balance (OR 2.25; 95% CI 1.83-2.77), being a woman (OR 1.56; 95% CI 1.28-1.90), working full-time (OR 1.41; 95% CI 1.08-1.85) and working as a nurse, a physician or in the psychosocial sector. Conclusions One-third of respondents, mostly nurses and physicians, experienced burnout and had lower levels of job satisfaction.
The factors associated with burnout may help to tailor programmes aiming at reducing burnout at both the individual and organizational level within the hospital.
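The burnout classification above rests on a single cutoff on the 0-100 CBI work-related scale. A minimal sketch, assuming (as is common for CBI scales, though not spelled out here) that the scale score is the mean of item scores each rated 0-100:

```python
# Sketch of the high-burnout classification: CBI work-related burnout
# scale score (0-100, taken here as the mean of 0-100 item scores -- an
# assumption for this sketch) at or above 50 counts as a high degree of
# burnout.

def high_burnout(item_scores):
    score = sum(item_scores) / len(item_scores)
    return score >= 50
```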
Abstract:
The huge conservation interest that mammals attract and the large datasets that have been collected on them have propelled a diversity of global mammal prioritization schemes, but have not yet produced a comprehensive global mammal conservation strategy. We highlight some of the potential discrepancies between the schemes presented in this theme issue, including: conservation of species or areas, reactive versus proactive conservation approaches, conservation knowledge versus action, levels of aggregation of indicators of trend, and scale issues. We propose that recently collected global mammal data and many of the mammal prioritization schemes now available could be incorporated into a comprehensive global strategy for the conservation of mammals. The task of developing such a strategy should be coordinated by a super-partes, authoritative institution (e.g. the International Union for Conservation of Nature, IUCN). The strategy would enable funding agencies, conservation organizations and national institutions to rapidly identify a number of short-term and long-term global conservation priorities, and to act complementarily to achieve them.
Abstract:
Surgical decision-making in lumbar spinal stenosis involves assessment of clinical parameters and the severity of the radiological stenosis. We suspected that surgeons based surgical decisions more on dural sac cross-sectional area (DSCA) than on the morphology of the dural sac. We carried out a survey among members of three European spine societies. The axial T2-weighted MR images from ten patients with varying degrees of DSCA and morphological grades according to the recently described morphological classification of lumbar spinal stenosis, with DSCA values disclosed in half the assessed images, were used for evaluation. We provided a clinical scenario to accompany the images, which were shown to 142 responding physicians, mainly orthopaedic surgeons but also some neurosurgeons and others directly involved in treating patients with spinal disorders. As the primary outcome we used the number of respondents who would proceed to surgery for a given DSCA or morphological grade. Substantial agreement among the respondents was observed, with severe or extreme stenosis as defined by the morphological grade leading to surgery. This decision was not dependent on the number of years in practice, medical density or specialty. Disclosing the DSCA did not alter operative decision-making. In all, 40 respondents (29%) had prior knowledge of the morphological grading system, but their responses showed no difference from those who had not. This study suggests that the participants were less influenced by DSCA than by the morphological appearance of the dural sac. Classifying lumbar spinal stenosis according to morphology rather than surface measurements appears to be consistent with current clinical practice.