126 results for 146-892D
Abstract:
Background A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7–December 31, 2009, at a postal area level in Queensland, Australia. Methods We used data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission using a flexible Bayesian, space–time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors. Results The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: −0.341; 95% credible interval (CI): −0.370 to −0.311) and the socio-economic index for area (SEIFA) (posterior mean: −0.003; 95% CI: −0.004 to −0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007–0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. High random effects on estimated transmission rates were apparent in remote areas and in some postal areas with higher proportions of Indigenous residents and smaller overall populations. Conclusions Local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates. The less populated regions had high average transmission rates with greater variation during the H1N1 epidemic period.
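The covariate effects reported above can be illustrated with a minimal sketch: a single weekly SIR update whose transmission rate is a log-linear function of LMXT, SEIFA, and the LMXT×LVAP interaction, using the posterior means from the abstract. This is an assumption-laden simplification of the study's Bayesian space–time SIR model, not its actual specification; the baseline term and all example values below are invented.

```python
import math

def transmission_rate(lmxt, seifa, lvap,
                      b0=0.0, b_lmxt=-0.341, b_seifa=-0.003, b_inter=0.008):
    """Log-linear weekly transmission rate using the posterior means
    reported in the abstract; b0 (the baseline) is a placeholder."""
    return math.exp(b0 + b_lmxt * lmxt + b_seifa * seifa
                    + b_inter * lmxt * lvap)

def sir_step(s, i, r, beta, gamma=0.5):
    """One weekly SIR update for a closed population of size n = s + i + r;
    gamma (recovery rate) is a made-up illustrative value."""
    n = s + i + r
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Example: one week in a hypothetical postal area of 10,000 people
# with 10 prevalent cases, warm weather, and a mid-range SEIFA score.
beta = transmission_rate(lmxt=28.0, seifa=1000.0, lvap=25.0)
s, i, r = sir_step(9990.0, 10.0, 0.0, beta)
```

Note how the negative LMXT coefficient is partly offset by the positive interaction term when vapour pressure is high, consistent with the abstract's finding that the temperature effect depends on humidity.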
Abstract:
Background Poor clinical handover has been associated with inaccurate clinical assessment and diagnosis, delays in diagnosis and test ordering, medication errors and decreased patient satisfaction in the acute care setting. Research on the handover process in the residential aged care sector is very limited. Purpose The aims of this study were to: (i) develop an in-depth understanding of the handover process in aged care by mapping all the key activities and their information dynamics; (ii) identify gaps in information exchange in the handover process and analyze implications for resident safety; and (iii) develop practical recommendations on how information communication technology (ICT) can improve the process and resident safety. Methods The study was undertaken at a large metropolitan facility in NSW with more than 300 residents and a staff including 55 registered nurses (RNs) and 146 assistants in nursing (AINs). A total of 3 focus groups, 12 interviews and 3 observation sessions were conducted from July to October 2010. Process mapping was undertaken by translating the qualitative data via a five-category code book developed prior to the analysis. Results Three major sub-processes were identified and mapped: Handover Process (HOP) I, "Information gathering by RN"; HOP II, "Preparation of preliminary handover sheet"; and HOP III, "Execution of handover meeting". Inefficiencies were identified in the handover, including duplication of information, use of multiple communication modes and information sources, and lack of standardization. Conclusion By providing a robust process model of handover, this study has made two critical contributions to research in aged care: (i) a means to identify important, possibly suboptimal practices; and (ii) valuable evidence to plan and improve ICT implementation in residential aged care.
The mapping of this process enabled analysis of gaps in information flow and their potential impacts on resident safety. In addition, it offers the basis for further studies into a process that, despite its importance for securing resident safety and continuity of care, remains under-researched.
Abstract:
We conducted two studies to improve our understanding of why and when older workers are focused on learning. Based on socioemotional selectivity theory, which proposes that goal focus changes with age and the perception of time, we hypothesized and found that older workers perceive their remaining time at work as more limited than younger workers do, which, in turn, is associated with lower learning goal orientation and a less positive attitude toward learning and development. Furthermore, we hypothesized and found that high work centrality buffers the negative association between age and perceived remaining time, and thus the indirect negative effects of age on learning goal orientation and attitude toward learning and development (through perceived remaining time). These findings suggest that scholars and practitioners should take workers' perceived remaining time and work centrality into account when examining or stimulating learning activities among aging workers.
Abstract:
Background: The irreversible epidermal growth factor receptor (EGFR) inhibitors have demonstrated efficacy in NSCLC patients with activating EGFR mutations, but it is unknown whether they are superior to the reversible inhibitors. Dacomitinib is an oral, small-molecule irreversible inhibitor of all enzymatically active HER family tyrosine kinases. Methods: The ARCHER 1009 (NCT01360554) and A7471028 (NCT00769067) studies randomized patients with locally advanced/metastatic NSCLC following progression on one or two prior chemotherapy regimens to dacomitinib or erlotinib. EGFR mutation testing was performed centrally on archived tumor samples. We pooled patients with exon 19 deletion and L858R EGFR mutations from both studies to compare the efficacy of dacomitinib with erlotinib. Results: One hundred twenty-one patients with any EGFR mutation were enrolled; 101 had activating mutations in exon 19 or 21. For patients with exon 19/21 mutations, the median progression-free survival was 14.6 months [95% confidence interval (CI) 9.0–18.2] with dacomitinib and 9.6 months (95% CI 7.4–12.7) with erlotinib [unstratified hazard ratio (HR) 0.717 (95% CI 0.458–1.124), two-sided log-rank, P = 0.146]. The median survival was 26.6 months (95% CI 21.6–41.5) with dacomitinib versus 23.2 months (95% CI 16.0–31.8) with erlotinib [unstratified HR 0.737 (95% CI 0.431–1.259), two-sided log-rank, P = 0.265]. Dacomitinib was associated with a higher incidence of diarrhea and mucositis in both studies compared with erlotinib. Conclusions: Dacomitinib is an active agent with efficacy comparable to erlotinib in EGFR-mutated patients. The subgroup with exon 19 deletion had favorable outcomes with dacomitinib. An ongoing phase III study will compare dacomitinib with gefitinib in first-line therapy of patients with NSCLC harboring common activating EGFR mutations (ARCHER 1050; NCT01774721). Clinical trials numbers: ARCHER 1009 (NCT01360554) and A7471028 (NCT00769067).
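The median progression-free survival figures compared above are typically read off a Kaplan-Meier curve. As a hedged illustration of that endpoint (not the trial's analysis, and using invented data), the sketch below implements a minimal Kaplan-Meier estimator and returns the first time point at which the survival function drops to 0.5 or below:

```python
def km_median(times, events):
    """Median survival time from (time, event) pairs, where event=1 is
    progression/death and event=0 is censoring. Returns None if the
    survival curve never reaches 0.5 (median not reached)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # Group all subjects sharing this event/censoring time.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # Kaplan-Meier product step
        at_risk -= removed
        if surv <= 0.5:
            return t
    return None

# Invented example: six subjects, all with observed progression.
median_pfs = km_median([2, 4, 6, 8, 10, 12], [1, 1, 1, 1, 1, 1])
```

The trial's log-rank tests and hazard ratios require the full risk-set machinery beyond this sketch; the point here is only how a median is located on the estimated curve.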
Abstract:
The purpose of this study is to investigate the accounting choice decisions of banks to employ Level 3 inputs in estimating the value of their financial assets and liabilities. Using a sample of 146 bank-year observations from 18 countries over 2009–2012, this study finds banks' incentives to use Level 3 valuation inputs are associated with both firm-level and country-level determinants. At the firm level, leverage, profitability (in terms of net income), Tier 1 capital ratio, size and audit committee independence are associated with the percentage of Level 3 valuation inputs. At the country level, economic development, legal region, legal enforcement and investor rights are also associated with the Level 3 classification choice. Lastly, 'secrecy', the proxy for cultural dimensions and values, is found to be positively associated with the use of Level 3 valuation inputs. Altogether, these findings suggest that banks use the discretion available under Level 3 inputs opportunistically to avoid violating debt covenant limits, to increase earnings and to manage their capital ratios. The results also highlight that corporate governance quality at the firm level (e.g. audit committee independence) and institutional features can constrain banks' opportunistic behavior in using the discretion available under Level 3 inputs. The results of this study have important implications for standard setters and contribute to the debate on the use of fair value accounting in an international context.