969 results for POOR-RISK
Abstract:
OBJECTIVE: To assess the health risk of exposure to benzene for a community affected by a fuel leak. METHODS: Data regarding the fuel leak accident, which occurred in Brasília, Federal District, were obtained from the Fuel Distributor reports provided to the environmental authority. Information about the affected population (22 individuals) was obtained from focus groups of eight individuals. Length of exposure and water benzene concentration were estimated through a groundwater flow model associated with a benzene propagation model. The risk assessment was conducted according to the Agency for Toxic Substances and Disease Registry methodology. RESULTS: A high risk perception related to the health consequences of the accident was evident in the affected community (22 individuals), probably due to the lack of assistance and the poor risk communication from government authorities and the polluting agent. The community had been exposed to unsafe levels of benzene (>5 µg/L) since December 2001, five months before they reported the leak. The mean benzene level in drinking water (72.2 µg/L) was higher than that obtained by the Fuel Distributor using the Risk Based Corrective Action methodology (17.2 µg/L). The estimated benzene intake from the consumption of water and food reached a maximum of 0.0091 µg/kg bw/day (5 × 10⁻⁷ cancer risk per 10⁶ individuals). The level of benzene in water vapor while showering reached 7.5 µg/m³ for children (1 per 10⁴ cancer risk). Total cancer risk ranged from 110 to 200 per 10⁶ individuals. CONCLUSIONS: The population affected by the fuel leak was exposed to benzene levels that might have represented a health risk. Local government authorities need to develop better strategies to respond rapidly to these types of accidents in order to protect the health of the affected population and the environment.
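For orientation, the ingestion risk quoted above follows the standard linear cancer-risk model used in ATSDR/EPA-style assessments, risk = chronic daily intake × oral slope factor. The abstract does not state which slope factor was applied; the figure is consistent with EPA's upper-bound oral slope factor for benzene of 0.055 (mg/kg-day)⁻¹, which is assumed in the worked form below (0.0091 µg/kg bw/day = 9.1 × 10⁻⁶ mg/kg bw/day).

    % Worked form under the assumption stated above (not taken from the article itself)
    \mathrm{Risk} \;=\; \mathrm{CDI}\times\mathrm{SF_{oral}}
                  \;=\; \bigl(9.1\times10^{-6}\ \mathrm{mg\,kg^{-1}\,day^{-1}}\bigr)\times
                        \bigl(0.055\ (\mathrm{mg\,kg^{-1}\,day^{-1}})^{-1}\bigr)
                  \;\approx\; 5\times10^{-7}

A lifetime risk of 5 × 10⁻⁷ corresponds to roughly 0.5 excess cases per 10⁶ exposed individuals, which is why the total risk of 110 to 200 per 10⁶ reported above is dominated by the inhalation (shower) pathway of about 1 per 10⁴.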
Treatment intensification and risk factor control: toward more clinically relevant quality measures.
Abstract:
BACKGROUND: Intensification of pharmacotherapy in persons with poorly controlled chronic conditions has been proposed as a clinically meaningful process measure of quality. OBJECTIVE: To validate measures of treatment intensification by evaluating their associations with subsequent control in hypertension, hyperlipidemia, and diabetes mellitus across 35 medical facility populations in Kaiser Permanente, Northern California. DESIGN: Hierarchical analyses of associations of improvements in facility-level treatment intensification rates from 2001 to 2003 with patient-level risk factor levels at the end of 2003. PATIENTS: Members (515,072 and 626,130; age >20 years) with hypertension, hyperlipidemia, and/or diabetes mellitus in 2001 and 2003, respectively. MEASUREMENTS: Treatment intensification for each risk factor was defined as an increase in the number of drug classes prescribed, an increase in dosage for at least 1 drug, or a switch to a drug from another class within 3 months of observed poor risk factor control. RESULTS: Facility-level improvements in treatment intensification rates between 2001 and 2003 were strongly associated with a greater likelihood of being in control at the end of 2003 (P ≤ 0.05 for each risk factor) after adjustment for patient- and facility-level covariates. Compared with facility rankings based solely on control, adding the percentages of poorly controlled patients who received treatment intensification changed 2003 rankings substantially: 14%, 51%, and 29% of the facilities changed ranks by 5 or more positions for hypertension, hyperlipidemia, and diabetes, respectively. CONCLUSIONS: Treatment intensification is tightly linked to improved control. Thus, it deserves consideration as a process measure for motivating quality improvement and possibly for measuring clinical performance.
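Because the MEASUREMENTS definition above is effectively a per-episode classification rule, a minimal sketch may make it concrete. This is an illustration only, not the study's code; the record fields (drug_class, daily_dose) and the function name are assumptions for the example, and the 3-month window is taken to be handled when selecting the follow-up regimen.

    # Minimal sketch (not the study's code) of the intensification rule described above:
    # within 3 months of a poor-control reading, an episode counts as "treatment
    # intensification" if the number of drug classes increased, the dose of at least one
    # continuing drug increased, or the patient switched to a drug from another class.
    # Record fields ('drug_class', 'daily_dose') are illustrative assumptions.
    def treatment_intensified(rx_before, rx_after):
        """rx_before: regimen at the poor-control reading;
        rx_after: regimen observed within the 3-month follow-up window.
        Each is a list of {'drug_class': str, 'daily_dose': float} records."""
        before = {r["drug_class"]: r["daily_dose"] for r in rx_before}
        after = {r["drug_class"]: r["daily_dose"] for r in rx_after}
        if len(after) > len(before):
            return True  # more drug classes prescribed
        if any(after[c] > before[c] for c in after if c in before):
            return True  # dose increase for at least one continuing drug
        if set(after) - set(before):
            return True  # switch to a drug from another class
        return False

A facility-level intensification rate, as used in the rankings above, would then be the share of poorly controlled patients for whom such a rule returns True.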
Abstract:
BACKGROUND: Assessment of the proportion of patients with well controlled cardiovascular risk factors underestimates the proportion of patients receiving high quality of care. Evaluating whether physicians respond appropriately to poor risk factor control gives a different picture of quality of care. We assessed physician responses to poor control of cardiovascular risk factors, as well as markers of potential overtreatment, in Switzerland, a country with universal healthcare coverage but without systematic quality monitoring, annual report cards on quality of care, or financial incentives to improve quality. METHODS: We performed a retrospective cohort study of 1002 randomly selected patients aged 50-80 years from four university primary care settings in Switzerland. For hypertension, dyslipidemia and diabetes mellitus, we first measured proportions in control, then assessed therapy modifications among those in poor control. "Appropriate clinical action" was defined as a therapy modification or return to control without therapy modification within 12 months among patients with baseline poor control. Potential overtreatment of these conditions was defined as intensive treatment among low-risk patients with optimal target values. RESULTS: 20% of patients with hypertension, 41% with dyslipidemia and 36% with diabetes mellitus were in control at baseline. When appropriate clinical action in response to poor control was integrated into measuring quality of care, 52 to 55% had appropriate quality of care. Over 12 months, therapy of 61% of patients with baseline poor control was modified for hypertension, 33% for dyslipidemia, and 85% for diabetes mellitus. Increases in the number of drug classes (28-51%) and in drug doses (10-61%) were the most common therapy modifications. Patients with target organ damage and higher baseline values were more likely to have appropriate clinical action. We found low rates of potential overtreatment: 2% for hypertension, 3% for diabetes mellitus and 3-6% for dyslipidemia. CONCLUSIONS: In primary care, evaluating whether physicians respond appropriately to poor risk factor control, in addition to assessing proportions in control, provides a broader view of the quality of care than relying solely on measures of proportions in control. Such measures could be more clinically relevant and acceptable to physicians than simply reporting levels of control.
Abstract:
Recent theoretical writings suggest that the ineffective regulation of negative emotional states may reduce the ability of women to detect and respond effectively to situational and interpersonal factors that increase risk for sexual assault. However, little empirical research has explored this hypothesis. In the present study, it was hypothesized that prior sexual victimization and negative mood state would each independently predict poor risk recognition and less effective defensive actions in response to an analogue sexual assault vignette. Further, these variables were expected to interact to produce particularly impaired risk responses. Finally, the in vivo emotion regulation strategy of suppression and the corresponding use of cognitive resources (operationalized as memory impairment for the vignette) were hypothesized to mediate these associations. Participants were 668 female undergraduate students who were randomly assigned to receive a negative or neutral film mood induction followed by an audiotaped dating interaction, during which they were instructed to indicate when the man had “gone too far” and to describe an adaptive response to the situation. Approximately 33.5% of the sample reported a single victimization and 10% reported revictimization. Hypotheses were largely unsupported, as sexual victimization history, mood condition, and their interaction did not affect risk recognition or adaptive responding. However, in vivo emotional suppression and cognitive resource usage were shown to predict delayed risk recognition only. Findings suggest that, contrary to hypotheses, negative mood (as induced here) may not relate to risk recognition and response impairments. However, it may be important for victimization prevention programs that focus on risk perception to address possible underlying issues with emotional suppression and limited cognitive resources in order to improve risk perception abilities. Limitations and future directions are discussed.
Abstract:
The availability of a regular power supply has been identified as one of the major stimulants for the growth and development of any nation and is thus important for its economic well-being. The problems of the Nigerian power sector stem from many factors, culminating in slow developmental growth and an inability to meet the power demands of its citizens despite the abundance of human and natural resources in the country. The main aim of the research was therefore to investigate the importance and contribution of risk management to the success of projects specific to the power sector. To achieve this aim, it was pertinent to examine the efficacy of the risk management process in practice, to elucidate the various risks typically associated with projects in the power sector (construction, contractual, political, financial, design, human resource and environmental risk factors), and to determine the current state of risk management practice in Nigeria. To address these factors, which inhibit progress on this overarching and prevailing issue and have been subject to only limited in-depth academic research, a rigorous mixed research method (quantitative and qualitative data analysis) was adopted. A review of the Nigerian power sector was also carried out as a precursor to the data collection stage. Using a purposive sampling technique, respondents were identified and a questionnaire survey was administered. The research hypotheses were tested using inferential statistics (Pearson correlation, Chi-square test, t-test and ANOVA), and the findings revealed the need for the development of a new risk management implementation framework. The proposed framework was tested within a company project to interpret the dynamism and essential benefits of risk management, with the aim of improving project performance (time), reducing the level of fragmentation (quality) and improving profitability (cost) within the Nigerian power sector, in order to bridge the gap between theory and practice. It was concluded that Nigeria's poor risk management practices have prevented it from experiencing strong growth and development. The study concludes, however, that successful implementation of the developed risk management framework may help it attain this status by enabling it to become more prepared and flexible in facing the challenges that previously led to project failures, thus contributing to its prosperity. The research provides an original contribution theoretically, methodologically and practically, adding to the project risk management body of knowledge and to the Nigerian power sector.
Abstract:
A total of 53 patients aged 18-60 years with high-intermediate or high-risk diffuse large B-cell lymphoma (DLBCL) were evaluated to analyze the impact of the cell of origin. Of the 53 patients, 16 underwent autologous SCT (ASCT) in first remission and the rest received conventional chemotherapy. Immunohistochemistry was evaluated in 47 cases: 17 were of germinal center (GC) origin and 30 were of non-GC origin. There was no survival difference between the two groups. Overall survival (OS) and disease-free survival (DFS) at 3 years were 93 and 83%, respectively, for the 14 patients who underwent ASCT. Their DFS was significantly better than that of patients who achieved CR but did not undergo ASCT. We conclude that ASCT is safe and improves the DFS of high-intermediate and high-risk DLBCL, regardless of the cell of origin. This observation should be confirmed in a larger study.
Abstract:
Oral busulfan is the historical backbone of the busulfan+cyclophosphamide regimen for autologous stem cell transplantation. However, intravenous busulfan has more predictable pharmacokinetics and less toxicity than oral busulfan; we therefore retrospectively analyzed data from 952 patients with acute myeloid leukemia who received intravenous busulfan for autologous stem cell transplantation. Most patients were male (n=531, 56%), and the median age at transplantation was 50.5 years. Two-year overall survival, leukemia-free survival, and relapse incidence were 67±2%, 53±2%, and 40±2%, respectively. The non-relapse mortality rate at 2 years was 7±1%. Five patients died from veno-occlusive disease. Overall, leukemia-free survival and relapse incidence at 2 years did not differ significantly between the 815 patients transplanted in first complete remission (52±2% and 40±2%, respectively) and the 137 patients transplanted in second complete remission (58±5% and 35±5%, respectively). Cytogenetic risk classification and age were significant prognostic factors: the 2-year leukemia-free survival was 63±4% in patients with good-risk cytogenetics, 52±3% in those with intermediate-risk cytogenetics, and 37±10% in those with poor-risk cytogenetics (P=0.01); patients ≤50 years old had better overall survival (77±2% versus 56±3%; P<0.001), leukemia-free survival (61±3% versus 45±3%; P<0.001), relapse incidence (35±2% versus 45±3%; P<0.005), and non-relapse mortality (4±1% versus 10±2%; P<0.001) than older patients. The combination of intravenous busulfan and high-dose melphalan was associated with the best overall survival (75±4%). Our results suggest that the use of intravenous busulfan simplifies the autograft procedure and confirm the usefulness of autologous stem cell transplantation in acute myeloid leukemia. As in allogeneic transplantation, veno-occlusive disease is an uncommon complication after an autograft using intravenous busulfan.
Abstract:
BACKGROUND: Poorly controlled cardiovascular risk factors are common. Evaluating whether physicians respond appropriately to poor risk factor control in patients may better reflect quality of care than measuring proportions of patients whose conditions are controlled. OBJECTIVES: To evaluate therapy modifications in response to poor control of hypertension, dyslipidemia, or diabetes in a large clinical population. DESIGN: Retrospective cohort study within an 18-month period in 2002 to 2003. SETTING: Kaiser Permanente of Northern California. PATIENTS: 253,238 adult members with poor control of 1 or more of these conditions. MEASUREMENTS: The authors assessed the proportion of patients with poor control who experienced a change in pharmacotherapy within 6 months, and they defined "appropriate care" as a therapy modification or return to control without therapy modification within 6 months. RESULTS: A total of 64% of patients experienced modifications in therapy for poorly controlled systolic blood pressure, 71% for poorly controlled diastolic blood pressure, 56% for poorly controlled low-density lipoprotein cholesterol level, and 66% for poorly controlled hemoglobin A1c level. Most frequent modifications were increases in number of drug classes (from 70% to 84%) and increased dosage (from 15% to 40%). An additional 7% to 11% of those with poorly controlled blood pressure, but only 3% to 4% of those with elevated low-density lipoprotein cholesterol level or hemoglobin A1c level, returned to control without therapy modification. Patients with more than 1 of the 3 conditions, higher baseline values, and target organ damage were more likely to receive "appropriate care." LIMITATIONS: Patient preferences and suboptimal adherence to therapy were not measured and may explain some failures to act. CONCLUSIONS: As an additional measure of the quality of care, measuring therapy modifications in response to poor control in a large population is feasible. Many patients with poorly controlled hypertension, dyslipidemia, or diabetes had their therapy modified and, thus, seemed to receive clinically "appropriate care" with this new quality measure.
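This abstract's quality measure differs from simple intensification in that a return to control without a therapy change also counts, within a 6-month window. A compact sketch of that classification, under assumed field names and not the study's implementation, might look like this:

    # Minimal sketch of the "appropriate care" measure described above: a patient with a
    # poorly controlled risk factor counts as appropriately managed if therapy was modified
    # OR the risk factor returned to control without modification, within 6 months.
    # The episode fields below are illustrative assumptions.
    def appropriate_care_rate(episodes, window_months=6):
        """episodes: list of dicts with 'months_to_therapy_change' and
        'months_to_return_to_control' (None if the event never occurred)."""
        def acted(e):
            times = (e.get("months_to_therapy_change"), e.get("months_to_return_to_control"))
            return any(t is not None and t <= window_months for t in times)
        return sum(acted(e) for e in episodes) / len(episodes)

The "appropriate care" proportions discussed above combine both pathways in this way: the 56% to 71% of patients with therapy modifications plus the additional 3% to 11% who returned to control without modification.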
Abstract:
PURPOSE: The prognostic impact of complete response (CR) achievement in multiple myeloma (MM) has been shown mostly in the context of autologous stem-cell transplantation. Other levels of response have been defined because, even with high-dose therapy, CR is a relatively rare event. The purpose of this study was to analyze the prognostic impact of very good partial response (VGPR) in patients treated with high-dose therapy. PATIENTS AND METHODS: All patients were included in the Intergroupe Francophone du Myelome 99-02 and 99-04 trials and treated with vincristine, doxorubicin, and dexamethasone (VAD) induction therapy followed by double autologous stem-cell transplantation (ASCT). Best post-ASCT response assessment was available for 802 patients. RESULTS: With a median follow-up of 67 months, median event-free survival (EFS) and 5-year EFS were 42 months and 34%, respectively, for the 405 patients who achieved at least VGPR after ASCT versus 32 months and 26% in the 288 patients who achieved only partial remission (P = .005). Five-year overall survival (OS) was significantly superior in patients achieving at least VGPR (74% v 61%; P = .0017). In multivariate analysis, achievement of less than VGPR was an independent factor predicting shorter EFS and OS. Response to VAD had no impact on EFS or OS. The impact of VGPR achievement on EFS and OS was significant in patients with International Staging System stages 2 to 3 and in patients with poor-risk cytogenetics, t(4;14) or del(17p). CONCLUSION: In the context of ASCT, achievement of at least VGPR is a simple prognostic factor that is important in intermediate- and high-risk MM and can be informative in more patients than CR.
Abstract:
In the 2005-01 trial, we demonstrated that bortezomib-dexamethasone as induction therapy before autologous stem cell transplantation was superior to vincristine-adriamycin-dexamethasone. We conducted a post-hoc analysis to assess the prognostic impact of initial characteristics as well as of response to therapy in patients enrolled in this study. Multivariate analysis showed that ISS stages 2 and 3 and achievement of a response less than very good partial response (VGPR), both after induction therapy and after autologous stem cell transplantation, were adverse prognostic factors for progression-free survival, the most important one being achievement of a response less than VGPR after induction. Progression-free survival was significantly improved with bortezomib-dexamethasone induction therapy in patients with poor-risk cytogenetics and ISS stages 2 and 3 compared with vincristine-adriamycin-dexamethasone. In these 2 groups of patients, achievement of at least VGPR after induction was of major importance. This study is registered with EudraCT (https://eudract.ema.europa.eu; EUDRACT 2005-000537-38) and http://clinicaltrials.gov (NCT00200681).
Abstract:
1. There is ample epidemiological and anecdotal evidence that a PFO increases the risk of stroke both in young and elderly patients, although only in a modest way: PFOs are more prevalent in patients with cryptogenic (unexplained) stroke than in healthy subjects, and are more prevalent in cryptogenic stroke than in strokes of other causes. Furthermore, multiple case series confirm an association of paradoxical embolism across a PFO in patients with deep vein thrombosis and/or pulmonary emboli.
2. Is stroke recurrence risk in PFO-patients really not elevated when compared to PFO-free patients, as suggested by traditional observational studies? This finding is an epidemiological artifact called "the paradox of recurrence risk research" (Dahabreh & Kent, JAMA 2011) and is due to one (minor) risk factor, such as PFO, being wiped out by other, stronger risk factors in the control population.
3. Having identified PFO as a risk factor for a first stroke and probably also for recurrences, we have to treat it, because treating risk factors has always paid off. No one would nowadays question the aggressive treatment of other risk factors of stroke such as hypertension, atrial fibrillation, smoking, or hyperlipidemia.
4. In order to be effective, the preventive treatment has to control the risk factor (i.e., effectively close the PFO) and has to have little or no side effects. Both these conditions are now fulfilled thanks to the increasing expertise of cardiologists with technically advanced closure devices and solid backup by multidisciplinary stroke teams.
5. Closing a PFO does not dispense us from treating other stroke risk factors aggressively, given that these are cumulative with PFO.
6. The most frequent reason why patients have a stroke recurrence after PFO closure is not that closure is ineffective, but that the initial stroke etiology was insufficiently investigated and not PFO related, and that the recurrence is due to another mechanism because of poor risk factor control.
7. Similarly, the randomized CLOSURE study was negative because a) patients were included who had a low chance that their initial event was due to the PFO, b) patients were selected with a low chance that a PFO-related recurrence would occur, c) there was an unacceptably high rate of closure-related side effects, and d) the number of randomized patients was too small for a prevention trial.
8. It is only a question of time until a sufficiently large randomized clinical trial with true PFO-related stroke patients and a high PFO-related recurrence risk will be performed and show the effectiveness of this closure.
9. PFO being a rather modest risk factor for stroke does not mean we should prevent our patients from getting the best available prevention by the best physicians in the best stroke centers.
Therefore, a PFO closure performed by an excellent cardiologist, following the recommendation of an expert neurovascular specialist after a thorough workup in a leading stroke center, is one of the most effective stroke prevention treatments available in 2011.
Abstract:
We report on two elderly patients with newly diagnosed acute myeloid leukemia (AML) who were treated with palliative intent because of comorbidities and intermediate- or poor-risk cytogenetics. Both received G-CSF to reduce the risk of infection related to neutropenia. Interestingly, one patient achieved a full hematological remission and the other a peripheral remission with a dramatic reduction of the bone marrow blast count. Although a direct therapeutic effect of myeloid growth factors seems to be unusual in AML, the use of G-CSF or GM-CSF may be recommended in patients, such as the elderly, who are not suited for intensive chemotherapy.
Abstract:
China's financial system has experienced a series of major reforms in recent years. Efforts have been made towards introducing the shareholding system in state-owned commercial banks, restructuring securities firms, re-organising the equity of joint venture insurance companies, further improving the corporate governance structure, managing financial risks and, ultimately, establishing a system to protect investors (Xinhua, 2010). Financial product innovation, together with the further opening up of financial markets and the development of the insurance and bond markets, has increased liquidity and reduced financial risks. Financial innovations can benefit the economy, but the U.S. subprime crisis indicated that, without proper control, they may lead to unexpected consequences. Kirkpatrick (2009) argues that failures and weaknesses in corporate governance arrangements, together with insufficient accounting standards and regulatory requirements, contributed to the financial crisis. Like the financial crises of the previous decade, the global financial crisis that erupted in 2008 surfaced a variety of significant corporate governance failures: the dysfunction of market mechanisms, the lack of transparency and accountability, misaligned compensation arrangements and the late response of government, all of which encouraged management short-termism, poor risk management and some fraudulent schemes. The unique characteristics of the Chinese banking system make it an interesting case for studying post-crisis corporate governance reform. Considering that China modelled its governance system on the Anglo-American system, this paper examines the impact of the financial crisis on corporate governance reform in developed economies and, particularly, China's reform of its financial sector. The paper further analyses the Chinese government's role in bank supervision and risk management. In this regard, the paper contributes to the corporate governance literature within the Chinese context by providing insights into the factors contributing to the corporate governance failures that led to the global financial crisis. It also provides policy recommendations for China's policy makers to consider seriously. The results suggest a need for the re-examination of corporate governance adequacy and the institutionalisation of business ethics. The paper's next section provides a review of China's financial system with reference to the financial crisis, followed by a critical evaluation of the capitalistic system and a review of the Anglo-American and Continental European models. It then analyses the need for a new corporate governance model in China by considering the bank failures in developed economies and the potential risks and inefficiencies in the current state-controlled system. The paper closes by reflecting on the need for Chinese policy makers to continually develop, adapt and rewrite corporate governance practices capable of meeting new challenges, and to pay attention to business ethics, an issue which goes beyond regulation.
Abstract:
For a long time, quality and management programs have dominated the landscape of solutions for organizational improvement. In this context, management fads have been constant, as has the consumption of many packaged solutions, and the market seems to feed on these packages. From one package to the next, corporate management has been developing and specializing, and along with it the methods and techniques of good management. Risk management, specifically, arises in this context. It presents itself as a management solution, but appears to establish itself as a point of convergence and influence in the optimization and assurance of production, management and support processes, covering a range of possibilities across several disciplines of the business world, from finance to fraud and everyday disruptions. Following risk management comes crisis management: the hemisphere of risks turned into real impacts, with visible damage to the organization. Crisis management presupposes the management of risks that have materialized and that clearly affect the organization and its context, both internal and external. At the end point of this logic appears trust, as that which seals the pact between the organization and its stakeholders, as the result of good or poor risk and crisis management. This study is therefore about risk, crisis and trust, and about understanding how management overlaps these elements and how this applies to organizations, specifically a large organization in the Brazilian market. After a literature review, a study was conducted to analyze how risk and crisis concepts and practices are being applied in a large company, interviewing the company's senior executive responsible for risk management in the information security area and another executive responsible for crisis management. A survey was also conducted to understand the perception of employees and managers of the studied company regarding risk and crisis management concepts and practices and their application in the company. Trust is also addressed, extrapolating this concept to an idea of the reliability of these practices and proposing a way to measure this reliability, identified as a gap in crisis management. Finally, an analysis is made of the extent to which the application of these concepts and practices is systematic, according to the hypotheses and assumptions defined, revealing the operational character and recent adoption of the practices, somewhat disconnected from the reference model proposed for the study and with little visibility among employees, especially regarding the perceived effectiveness of the adopted practices.