169 results for frequency of audit reports
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate the effects of ionospheric delay in various positioning and analysis tasks. This typical treatment has limitations in processing signals at three or more frequencies from more than one system, and can hardly be adapted to cope with the rapid proliferation of receivers tracking a broad variety of signals. In this contribution, a generalized positioning model that is independent of the navigation system and of the number of carriers is proposed, suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USD) are defined in a general way to compensate for the signal-specific offsets in code and phase observations, respectively. In addition, the ionospheric delays are included in the parameterization with careful consideration. Based on an analysis of the algebraic structure, this generalized positioning model is further refined with a set of proper constraints to remedy the datum deficiency of the observation equation system. With this new model, USDs and ionospheric delays are derived for both GPS and BeiDou from a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCD) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPD) for L1 and L2 are generated for GPS with 37 stations evenly distributed in China, with a consistency of about 0.3 cycles. Additional experiments on the performance of this novel model in point positioning with mixed frequencies of mixed constellations are analyzed, in which the USD parameters are fixed to our generated values.
The results are evaluated in terms of both positioning accuracy and convergence time.
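The dual-frequency linear combination that the abstract describes as the traditional way of eliminating first-order ionospheric delay can be sketched in a few lines. Only the GPS L1/L2 carrier frequencies are taken as given; the pseudorange and delay values below are hypothetical, chosen so that the cancellation is visible:

```python
# Ionosphere-free linear combination of dual-frequency GPS pseudoranges.
# The first-order ionospheric delay scales with 1/f^2, so weighting the two
# code ranges by f1^2/(f1^2-f2^2) and -f2^2/(f1^2-f2^2) cancels it.

F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def ionosphere_free(p1: float, p2: float) -> float:
    """Return the ionosphere-free combination of two code ranges (metres)."""
    a = F1**2 / (F1**2 - F2**2)
    b = -F2**2 / (F1**2 - F2**2)
    return a * p1 + b * p2

# Hypothetical pseudoranges (metres) differing only by the dispersive delay:
# an ionospheric delay d on L1 appears scaled by (F1/F2)^2 on L2.
rho, d = 22_000_000.0, 5.0          # geometric range and L1 iono delay
p1 = rho + d
p2 = rho + d * (F1 / F2) ** 2
print(ionosphere_free(p1, p2))      # recovers the geometric range
```

This is the two-frequency special case; the abstract's point is precisely that this construction does not generalize cleanly to three or more frequencies across constellations, which motivates the USD parameterization.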
Abstract:
Objectives To investigate the frequency of the ACTN3 R577X polymorphism in elite endurance triathletes, and whether ACTN3 R577X is significantly associated with performance time. Design Cross-sectional study. Methods Saliva samples, questionnaires, and performance times were collected for 196 elite endurance athletes who participated in the 2008 Kona Ironman championship triathlon. Athletes were of predominantly North American, European, and Australian origin. A one-way analysis of variance was conducted to compare performance times between genotype groups. Multiple linear regression analysis was performed to model the effect of questionnaire variables and genotype on performance time. Genotype and allele frequencies were compared to results from different populations using the chi-square test. Results Performance time did not significantly differ between genotype groups, and age, sex, and continent of origin were significant predictors of finishing time (age and sex: p < 5 × 10−6; continent: p = 0.003) though genotype was not. Genotype and allele frequencies obtained (RR 26.5%, RX 50.0%, XX 23.5%, R 51.5%, X 48.5%) were found to be not significantly different from Australian, Spanish, and Italian endurance athletes (p > 0.05), but were significantly different from Kenyan, Ethiopian, and Finnish endurance athletes (p < 0.01). Conclusions Genotype and allele frequencies agreed with those reported for endurance athletes of similar ethnic origin, supporting previous findings for an association between 577X allele and endurance. However, analysis of performance time suggests that ACTN3 does not alone influence endurance performance, or may have a complex effect on endurance performance due to a speed/endurance trade-off.
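The chi-square comparison of genotype counts between athlete populations can be sketched as follows. The Ironman counts are back-calculated from the reported percentages (RR 26.5%, RX 50.0%, XX 23.5% of n = 196); the comparison cohort is a made-up illustration, and the pure-Python statistic is compared against the 5% critical value for df = 2 rather than an exact p-value:

```python
# Chi-square test of genotype counts between two cohorts (2 x 3 table).

def chi_square_2xk(row_a, row_b):
    """Chi-square statistic for a 2 x k contingency table of counts."""
    total = sum(row_a) + sum(row_b)
    stat = 0.0
    for j in range(len(row_a)):
        col = row_a[j] + row_b[j]                 # column total
        for row, obs in ((row_a, row_a[j]), (row_b, row_b[j])):
            expected = sum(row) * col / total     # expected under independence
            stat += (obs - expected) ** 2 / expected
    return stat

ironman = [52, 98, 46]   # RR, RX, XX counts (26.5%/50.0%/23.5% of n = 196)
other   = [60, 90, 50]   # hypothetical comparison cohort
stat = chi_square_2xk(ironman, other)
# Compare with the 5% critical value for df = (2-1)*(3-1) = 2, i.e. 5.991
print(stat, stat > 5.991)
```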
Abstract:
Benzodiazepines are widely prescribed to manage sleep disorders, anxiety and muscular tension. While providing short-term relief, continued use induces tolerance and withdrawal, and in older users, increases the risk of falls. However, long-term prescription remains common, and effective interventions are not widely available. This study developed a self-managed cognitive behaviour therapy package for cessation of benzodiazepine use delivered to participants via mail (M-CBT) and trialled its effectiveness as an adjunct to a general practitioner (GP)-managed dose reduction schedule. In the pilot trial, participants were randomly assigned to GP management with immediate or delayed M-CBT. Significant recruitment and engagement problems were experienced, and only three participants were allocated to each condition. After immediate M-CBT, two participants ceased use, while none receiving delayed treatment reduced daily intake by more than 50%. Across the sample, doses at 12 months remained significantly lower than baseline, and qualitative feedback from participants was positive. While M-CBT may have promise, improved engagement of GPs and participants is needed for this approach to substantially impact community-wide benzodiazepine use.
Abstract:
Objectives To characterize and discover the determinants of the frequency of wear (FOW) of contact lenses. Methods Survey forms were sent to contact lens fitters in up to 40 countries between January and March every year for 5 consecutive years (2007–2011). Practitioners were asked to record data relating to the first 10 contact lens fits or refits performed after receiving the survey form. Only data for daily wear lens fits were analyzed. Results Data were collected in relation to 74,510 and 9,014 soft and rigid lens fits, respectively. Overall, FOW was 5.9±1.7 days per week (DPW). When considering the proportion of lenses worn from one to seven DPW, the distribution for rigid lenses is skewed toward full-time wear (7 DPW), whereas the distribution for soft daily disposable lenses is perhaps bimodal, with large and small peaks at seven and two DPW, respectively. There is a significant variation in FOW among nations (P<0.0001), ranging from 6.8±1.0 DPW in Greece to 5.1±2.5 DPW in Kuwait. For soft lenses, FOW increases with decreasing age. Females (6.0±1.6 DPW) wear lenses more frequently than males (5.8±1.7 DPW) (P=0.0002). FOW is greater among those wearing presbyopic corrections (6.1±1.4 DPW) compared with spherical (5.9±1.7 DPW) and toric (5.9±1.6 DPW) designs (P<0.0001). FOW with hydrogel peroxide systems (6.4±1.1 DPW) was greater than that with multipurpose systems (6.2±1.3 DPW) (P<0.0001). Conclusions Numerous demographic and contact lens–related factors impact FOW. There may be a future trend toward a lower FOW associated with the increasing popularity of daily disposable lenses.
Abstract:
Objective To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Method Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications from a human-amended version of the OCR reports. Results The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. Conclusions The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
Abstract:
Objective: To develop a system for the automatic classification of pathology reports for Cancer Registry notifications. Method: A two pass approach is proposed to classify whether pathology reports are cancer notifiable or not. The first pass queries pathology HL7 messages for known report types that are received by the Queensland Cancer Registry (QCR), while the second pass aims to analyse the free text reports and identify those that are cancer notifiable. Cancer Registry business rules, natural language processing and symbolic reasoning using the SNOMED CT ontology were adopted in the system. Results: The system was developed on a corpus of 500 histology and cytology reports (with 47% notifiable reports) and evaluated on an independent set of 479 reports (with 52% notifiable reports). Results show that the system can reliably classify cancer notifiable reports with a sensitivity, specificity, and positive predictive value (PPV) of 0.99, 0.95, and 0.95, respectively for the development set, and 0.98, 0.96, and 0.96 for the evaluation set. High sensitivity can be achieved at a slight expense in specificity and PPV. Conclusion: The system demonstrates how medical free-text processing enables the classification of cancer notifiable pathology reports with high reliability for potential use by Cancer Registries and pathology laboratories.
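A minimal sketch of the three reported evaluation metrics, computed from a 2x2 confusion matrix. The counts below are hypothetical, chosen only to be roughly consistent with the development-set figures (500 reports, 47% notifiable), not taken from the paper:

```python
# Sensitivity, specificity and positive predictive value from a 2x2
# confusion matrix of "cancer notifiable" classifications.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)   # notifiable reports correctly flagged
    specificity = tn / (tn + fp)   # non-notifiable reports correctly passed
    ppv = tp / (tp + fp)           # flagged reports that are truly notifiable
    return sensitivity, specificity, ppv

# Hypothetical counts: 233 true positives, 13 false positives,
# 2 false negatives, 252 true negatives (n = 500, 235 notifiable).
s, sp, ppv = diagnostic_metrics(233, 13, 2, 252)
print(round(s, 2), round(sp, 2), round(ppv, 2))  # 0.99 0.95 0.95
```

The "high sensitivity at a slight expense in specificity and PPV" trade-off in the conclusion corresponds to lowering the classification threshold: fn shrinks while fp grows, so the first ratio rises and the other two fall.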
Abstract:
Following an initial consultation draft (Turnbull, 1999a), the Internal Control Working Party of the Institute of Chartered Accountants in England and Wales, chaired by Nigel Turnbull, executive director of Rank Group plc, has published Internal Control: Guidance for Directors of Listed Companies Incorporated in the UK (Turnbull, 1999b). The guidance is commonly referred to as the Turnbull Report. This paper outlines the key recommendations of the report and discusses some of its implications, particularly in the context of the increasing emphasis on a broader corporate governance role for audit committees. The paper suggests that the increasing role envisaged for audit committees, for example lately in the UK by Turnbull, may generate undue expectations premised on an unsubstantiated notion of the contribution of audit committees.
Abstract:
Arguments associated with the promotion of audit committees in many countries are premised on their potential for alleviating weaknesses in corporate governance. This paper provides a synthesis and evaluation of empirical research on the governance effects associated with audit committees. Given recent policy recommendations in several countries aimed at strengthening these committees, it is important to establish what research evidence demonstrates about their existing governance contribution. A framework for analyzing the impact of audit committees is described, identifying potential perceived effects which may have led to their adoption and documented effects on aspects of the audit function, on financial reporting quality and on corporate performance. It is argued that there is only limited and mixed evidence of effects to support claims and perceptions about the value of audit committees for these elements of governance. It is also shown that most of the existing research has focused on factors associated with audit committee existence, characteristics and measures of activity, and that there is very little evidence on the processes associated with the operation of audit committees and the manner in which they influence organizational behaviour. It is clear that there is no automatic relationship between the adoption of audit committee structures or characteristics and the achievement of particular governance effects, and caution may be needed over expectations that greater codification around factors such as audit committee members' independence and expertise can serve as the means of "correcting" past weaknesses in the arrangements for audit committees. The most fundamental question concerning what difference audit committees make in practice continues to be an important area for research development.
For future research we suggest: (i) greater consideration of the organizational and institutional contexts in which audit committees operate; (ii) explicit theorization of the processes associated with audit committee operation; (iii) complementing extant research methods with field studies; and (iv) investigation of unintended (behavioural) as well as expected consequences of audit committees.
Abstract:
This chapter provides a synthesis and evaluation of empirical research on the governance effects associated with audit committees. Given recent policy recommendations in several countries aimed at strengthening these committees, it is important to establish what research evidence demonstrates about their existing governance contribution. A framework for analyzing the impact of audit committees is described, identifying potential perceived effects which may have led to their adoption and documented effects on aspects of the audit function, on financial reporting quality and on corporate performance. It is also shown that most of the existing research has focused on factors associated with audit committee existence, characteristics, and measures of activity, and that there is very little evidence on the processes associated with the operation of audit committees and the manner in which they influence organizational behavior. It is clear that there is no automatic relationship between the adoption of audit committee structures or characteristics and the achievement of particular governance effects, and caution may be needed over expectations that greater codification around factors such as audit committee members' independence and expertise can serve as the means of "correcting" past weaknesses in the arrangements for audit committees. The most fundamental question concerning what difference audit committees make in practice continues to be an important area for research development. For future research we suggest: (1) greater consideration of the organizational and institutional contexts in which audit committees operate; (2) explicit theorization of the processes associated with audit committee operation; (3) complementing extant research methods with field studies; and (4) investigation of unintended as well as expected consequences of audit committees.
Abstract:
Purpose To investigate the frequency of convergence and accommodation anomalies in an optometric clinical setting in Mashhad, Iran, and to determine tests with highest accuracy in diagnosing these anomalies. Methods From 261 patients who came to the optometric clinics of Mashhad University of Medical Sciences during a month, 83 of them were included in the study based on the inclusion criteria. Near point of convergence (NPC), near and distance heterophoria, monocular and binocular accommodative facility (MAF and BAF, respectively), lag of accommodation, positive and negative fusional vergences (PFV and NFV, respectively), AC/A ratio, relative accommodation, and amplitude of accommodation (AA) were measured to diagnose the convergence and accommodation anomalies. The results were also compared between symptomatic and asymptomatic patients. The accuracy of these tests was explored using sensitivity (S), specificity (Sp), and positive and negative likelihood ratios (LR+, LR−). Results Mean age of the patients was 21.3 ± 3.5 years and 14.5% of them had specific binocular and accommodative symptoms. Convergence and accommodative anomalies were found in 19.3% of the patients; accommodative excess (4.8%) and convergence insufficiency (3.6%) were the most common accommodative and convergence disorders, respectively. Symptomatic patients showed lower values for BAF (p = .003), MAF (p = .001), as well as AA (p = .001) compared with asymptomatic patients. Moreover, BAF (S = 75%, Sp = 62%) and MAF (S = 62%, Sp = 89%) were the most accurate tests for detecting accommodative and convergence disorders in terms of both sensitivity and specificity. Conclusions Convergence and accommodative anomalies are the most common binocular disorders in optometric patients. Including tests of monocular and binocular accommodative facility in routine eye examinations as accurate tests to diagnose these anomalies requires further investigation.
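The sensitivity and specificity figures reported for BAF and MAF convert directly into the positive and negative likelihood ratios (LR+, LR−) the study uses; a small sketch (the conversion formulas are standard, the rounding is ours):

```python
# Likelihood ratios from sensitivity and specificity:
#   LR+ = S / (1 - Sp)   how much a positive test raises the odds of disorder
#   LR- = (1 - S) / Sp   how much a negative test lowers the odds

def likelihood_ratios(sens: float, spec: float):
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return lr_pos, lr_neg

# Reported values: BAF S = 75%, Sp = 62%; MAF S = 62%, Sp = 89%.
for name, s, sp in (("BAF", 0.75, 0.62), ("MAF", 0.62, 0.89)):
    lr_p, lr_n = likelihood_ratios(s, sp)
    print(name, round(lr_p, 2), round(lr_n, 2))
```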
Abstract:
Purpose: To establish whether there was a difference in health-related quality of life (HRQoL) in people with chronic musculoskeletal disorders (PwCMSKD) after participating in a multimodal physiotherapy program (MPP) either two or three sessions a week. Methods: A total of 114 PwCMSKD participated in this prospective randomised controlled trial. An individualised MPP, consisting of exercises for mobility, motor control, muscle strengthening, cardiovascular training, and health education, was implemented either twice a week (G2: n = 58) or three times a week (G3: n = 56) for 1 year. Outcomes: HRQoL physical and mental health state (PHS/MHS), the Roland Morris Disability Questionnaire (RMQ), the Neck Disability Index (NDI) and the Western Ontario and McMaster Universities Arthritis Index (WOMAC) were used to measure outcomes of the MPP for people with chronic low back pain, chronic neck pain and osteoarthritis, respectively. Measures were taken at baseline, 8 weeks (8 w), 6 months (6 m), and 1 year (1 y) after starting the programme. Results: No statistically significant differences were found between the two groups (G2 and G3), except in the NDI at 8 w (−3.34; 95% CI: −6.94 to 0.84; p = 0.025; scale 0–50). All variables improved from baseline to 1 y, reaching the following values. G2: PHS 57.72 (baseline 41.17; improvement 16.55%), MHS 74.51 (baseline 47.46; 27.05%), HRQoL 0.90 (baseline 0.72; 18%), HRQoL-VAS 84.29 (baseline 58.04; 26.25%), RMQ 4.15 (baseline 7.85; 15.42%), NDI 3.96 (baseline 21.87; 35.82%), WOMAC 7.17 (baseline 25.51; 19.10%). G3: PHS 58.64 (baseline 39.75; 18.89%), MHS 75.50 (baseline 45.45; 30.05%), HRQoL 0.67 (baseline 0.88; 21%), HRQoL-VAS 86.91 (baseline 52.64; 34.27%), RMQ 4.83 (baseline 8.93; 17.08%), NDI 4.91 (baseline 23.82; 37.82%), WOMAC 6.35 (baseline 15.30; 9.32%).
Conclusions: No significant differences between the two groups were found in the outcomes of the MPP except in the NDI at 8 weeks, but both groups improved in all variables over the 1-year study period.
Abstract:
While historically linked with psychoanalysis, countertransference is recognised as an important component of the experience of therapists, regardless of the therapeutic modality. This study considers the implications of this for the training of psychologists. Fifty-five clinical psychology trainees from four university training programmes completed an anonymous questionnaire that collected written reports of countertransference experiences, ratings of confidence in managing these responses, and supervision in this regard. The reports were analysed using a process of thematic analysis. Several themes emerged including a desire to protect or rescue clients, feeling criticised or controlled by clients, feeling helpless, and feeling disengaged. Trainees varied in their reports of awareness of countertransference and the regularity of supervision in this regard. The majority reported a lack of confidence in managing their responses, and all reported interest in learning about countertransference. The implications for reflective practice in postgraduate psychology training are discussed.
Abstract:
Background People admitted to intensive care units and those with chronic health care problems often require long-term vascular access. Central venous access devices (CVADs) are used for administering intravenous medications and blood sampling. CVADs are covered with a dressing and secured with an adhesive or adhesive tape to protect them from infection and reduce movement. Dressings are changed when they become soiled with blood or start to come away from the skin. Repeated removal and application of dressings can cause damage to the skin. The skin is an important barrier that protects the body against infection. Less frequent dressing changes may reduce skin damage, but it is unclear whether this practice affects the frequency of catheter-related infections. Objectives To assess the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections and other outcomes including pain and skin damage. Search methods In June 2015 we searched: The Cochrane Wounds Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid MEDLINE; Ovid MEDLINE (In-Process & Other Non-Indexed Citations); Ovid EMBASE and EBSCO CINAHL. We also searched clinical trials registries for registered trials. There were no restrictions with respect to language, date of publication or study setting. Selection criteria All randomised controlled trials (RCTs) evaluating the effect of the frequency of CVAD dressing changes on the incidence of catheter-related infections in all patients in any healthcare setting. Data collection and analysis We used standard Cochrane review methodology. Two review authors independently assessed studies for inclusion, performed risk of bias assessment and data extraction. We undertook meta-analysis where appropriate, and otherwise synthesised data descriptively where studies were heterogeneous.
Main results We included five RCTs (2277 participants) that compared different frequencies of CVAD dressing changes. The studies were all conducted in Europe and published between 1995 and 2009. Participants were recruited from the intensive care and cancer care departments of one children's and four adult hospitals. The studies used a variety of transparent dressings and compared a longer interval between dressing changes (5 to 15 days; intervention) with a shorter interval between changes (2 to 5 days; control). In each study participants were followed up until the CVAD was removed or until discharge from ICU or hospital.
- Confirmed catheter-related bloodstream infection (CRBSI): One trial randomised 995 people receiving central venous catheters to a longer or shorter interval between dressing changes and measured CRBSI. It is unclear whether there is a difference in the risk of CRBSI between people having long or short intervals between dressing changes (RR 1.42, 95% confidence interval (CI) 0.40 to 4.98) (low quality evidence).
- Suspected catheter-related bloodstream infection: Two trials randomised a total of 151 participants to longer or shorter dressing intervals and measured suspected CRBSI. It is unclear whether there is a difference in the risk of suspected CRBSI between people having long or short intervals between dressing changes (RR 0.70, 95% CI 0.23 to 2.10) (low quality evidence).
- All cause mortality: Three trials randomised a total of 896 participants to longer or shorter dressing intervals and measured all cause mortality. It is unclear whether there is a difference in the risk of death from any cause between people having long or short intervals between dressing changes (RR 1.06, 95% CI 0.90 to 1.25) (low quality evidence).
- Catheter-site infection: Two trials randomised a total of 371 participants to longer or shorter dressing intervals and measured catheter-site infection. It is unclear whether there is a difference in risk of catheter-site infection between people having long or short intervals between dressing changes (RR 1.07, 95% CI 0.71 to 1.63) (low quality evidence).
- Skin damage: One small trial (112 children) and three trials (1475 adults) measured skin damage. There was very low quality evidence for the effect of long intervals between dressing changes on skin damage compared with short intervals (children: RR of scoring ≥ 2 on the skin damage scale 0.33, 95% CI 0.16 to 0.68; data for adults not pooled).
- Pain: Two studies involving 193 participants measured pain. It is unclear if there is a difference between long and short interval dressing changes on pain during dressing removal (RR 0.80, 95% CI 0.46 to 1.38) (low quality evidence).
Authors' conclusions The best available evidence is currently inconclusive regarding whether longer intervals between CVAD dressing changes are associated with more or less catheter-related infection, mortality or pain than shorter intervals.
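The risk ratios and 95% confidence intervals quoted throughout these results can be reproduced from 2x2 event counts with the standard log-risk-ratio normal approximation; the counts below are purely illustrative, since the review reports only pooled estimates:

```python
import math

# Risk ratio with a 95% CI from a 2x2 table (events / totals per arm),
# using the normal approximation on the log risk ratio.

def risk_ratio(events_a: int, n_a: int, events_b: int, n_b: int):
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of ln(RR) for independent binomial arms
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 7 infections in 500 (long interval) vs 5 in 495 (short)
rr, lo, hi = risk_ratio(7, 500, 5, 495)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

With rare events like these, the CI straddles 1, which is exactly the "unclear whether there is a difference" pattern seen in each outcome above.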