801 results for Risk preferences
Abstract:
This paper reports research into teacher-librarians' perceptions of using social media and Web 2.0 in teaching and learning. A pilot study was conducted with teacher-librarians in five government schools and five private schools in southeast Queensland. The findings revealed that there was a strong digital divide between government schools and private schools, with government schools suffering severe restrictions on the use of social media and Web 2.0, leading to an unsophisticated use of these technologies. It is argued that internet 'over-blocking' may lead to government school students not being empowered to manage risks in an open internet environment. Furthermore, their use of information for academic and recreational learning may be compromised. This has implications particularly for low socioeconomic students, leading to further inequity in the process and outcomes of Australian education.
Abstract:
Background: Decreased ability to perform Activities of Daily Living (ADLs) during hospitalisation has negative consequences for patients and health service delivery. Objective: To develop an Index to stratify patients at lower and higher risk of a significant decline in ability to perform ADLs at discharge. Design: Prospective two-cohort study comprising a derivation cohort (n = 389; mean age 82.3 years; SD = 7.1) and a validation cohort (n = 153; mean age 81.5 years; SD = 6.1). Patients and setting: General medical patients aged ≥ 70 years admitted to three university-affiliated acute care hospitals in Brisbane, Australia. Measurement and main results: The short ADL Scale was used to identify a significant decline in ability to perform ADLs from premorbid to discharge. In the derivation cohort, 77 patients (19.8%) experienced a significant decline. Four significant factors were identified for patients independent at baseline: 'requiring moderate assistance to being totally dependent on others with bathing'; 'difficulty understanding others (frequently or all the time)'; 'requiring moderate assistance to being totally dependent on others with performing housework'; and a 'history of experiencing at least one fall in the 90 days prior to hospital admission'; in addition, 'independent at baseline' was protective against decline at discharge. 'Difficulty understanding others (frequently or all the time)' and 'requiring moderate assistance to being totally dependent on others with performing housework' were also predictors for patients dependent in ADLs at baseline. Sensitivity, specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV) of the DADLD dichotomised risk scores were: 83.1% (95% CI 72.8; 90.7); 60.5% (95% CI 54.8; 65.9); 34.2% (95% CI 27.5; 41.5); 93.5% (95% CI 89.2; 96.5). In the validation cohort, 47 patients (30.7%) experienced a significant decline. 
Sensitivity, specificity, PPV and NPV of the DADLD were: 78.7% (95% CI 64.3; 89.3); 69.8% (95% CI 60.1; 78.3); 53.6% (95% CI 41.2; 65.7); 88.1% (95% CI 79.2; 94.1). Conclusions: The DADLD Index is a useful tool for identifying patients at higher risk of decline in ability to perform ADLs at discharge.
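The diagnostic metrics reported above all follow from a 2×2 confusion matrix. As a minimal sketch, the counts below are reconstructed approximately from the derivation-cohort figures (77 of 389 patients declined; sensitivity 83.1%, specificity 60.5%), not taken verbatim from the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives among all patients who declined
    specificity = tn / (tn + fp)   # true negatives among all patients who did not decline
    ppv = tp / (tp + fp)           # probability of decline given a high-risk score
    npv = tn / (tn + fn)           # probability of no decline given a low-risk score
    return sensitivity, specificity, ppv, npv

# Approximate counts reconstructed from the derivation cohort (illustrative only)
sens, spec, ppv, npv = diagnostic_metrics(tp=64, fp=123, fn=13, tn=189)
```

With these counts the four values come out close to the published 83.1%, 60.5%, 34.2% and 93.5%, which illustrates why a low PPV can coexist with a high NPV when the event (decline) is relatively uncommon.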
Abstract:
This paper describes a study designed to understand player responses to artificially intelligent opponents in multi-player First Person Shooter games. It examines the player's ability to tell the difference between artificially intelligent opponents and other human players, and investigates the players' perceptions of these opponents. The study examines player preferences in this regard and identifies the significance of the cues and signs players use to categorise an opponent as artificial or human.
Abstract:
One of the primary desired capabilities of any future air traffic separation management system is the ability to provide early conflict detection and resolution effectively and efficiently. In this paper, we consider the risk of conflict as a primary measurement to be used for early conflict detection. This paper focuses on developing a novel approach to assess the impact of different measurement uncertainty models on the estimated risk of conflict. The measurement uncertainty model can be used to represent different sensor accuracy and sensor choices. Our study demonstrates the value of modelling measurement uncertainty in the conflict risk estimation problem and presents techniques providing a means of assessing sensor requirements to achieve desired conflict detection performance.
Abstract:
Background: Critically ill patients are at high risk for pressure ulcer (PrU) development due to their high acuity and the invasive nature of the multiple interventions and therapies they receive. With reported incidence rates of PrU development in the adult critical care population as high as 56%, the identification of patients at high risk of PrU development is essential. This paper will explore the association between PrU development and risk factors. It will also explore PrU development and the use of risk assessment scales for critically ill patients in adult intensive care units. Method: A literature search from 2000 to 2012 using the CINAHL, Cochrane Library, EBSCOHost, Medline (via EBSCOHost), PubMed, ProQuest and Google Scholar databases was conducted. Key words used were: pressure ulcer/s; pressure sore/s; decubitus ulcer/s; bed sore/s; critical care; intensive care; critical illness; prevalence; incidence; prevention; management; risk factor; risk assessment scale. Results: Nineteen articles were included in this review; eight studies addressing PrU risk factors, eight studies addressing risk assessment scales and three studies overlapping both. Results from the studies reviewed identified 28 intrinsic and extrinsic risk factors which may lead to PrU development. Development of a risk factor prediction model in this patient population, although beneficial, appears problematic due to many issues such as diverse diagnoses and subsequent patient needs. Additionally, several risk assessment instruments have been developed for early screening of patients at higher risk of developing PrU in the ICU. No existing risk assessment scale is valid for identifying high-risk critically ill patients, with the majority of scales potentially over-predicting patients at risk for PrU development. Conclusion: Research studies informing the risk factors for potential pressure ulcer development are inconsistent. 
Additionally, there is no consistent or clear evidence which demonstrates any scale to be better or more effective than another when used to identify the patients at risk for PrU development. Furthermore, robust research is needed to identify the risk factors and develop valid scales for measuring the risk of PrU development in the ICU.
Abstract:
Motivated by growing considerations of the scale, severity and risks associated with human exposure to indoor particulate matter, this work reviewed existing literature to: (i) identify state-of-the-art experimental techniques used for personal exposure assessment; (ii) compare exposure levels reported for domestic/school settings in different countries (excluding exposure to environmental tobacco smoke and particulate matter from biomass cooking in developing countries); (iii) assess the contribution of outdoor background versus indoor sources to personal exposure; and (iv) examine scientific understanding of the risks posed by personal exposure to indoor aerosols. Limited studies assessing integrated daily residential exposure to just one particle size fraction, ultrafine particles, show that the contribution of indoor sources ranged from 19% to 76%. This indicates a strong dependence on resident activities, source events and site specificity, and highlights the importance of indoor sources for total personal exposure. Further, it was assessed that 10-30% of the total burden of disease from particulate matter exposure was due to indoor generated particles, signifying that indoor environments are likely to be a dominant environmental factor affecting human health. However, due to challenges associated with conducting epidemiological assessments, the role of indoor generated particles has not been fully acknowledged, and improved exposure/risk assessment methods are still needed, together with a serious focus on exposure control.
Abstract:
Vitamin D may have anti-skin cancer effects, but population-based evidence is lacking. We therefore assessed associations between vitamin D status and skin cancer risk in an Australian subtropical community. We analyzed prospective skin cancer incidence for 11 years following baseline assessment of serum 25(OH)-vitamin D in 1,191 adults (average age 54 years) and used multivariable logistic regression analysis to adjust risk estimates for age, sex, detailed assessments of usual time spent outdoors, phenotypic characteristics, and other possible confounders. Participants with serum 25(OH)-vitamin D concentrations above 75 nmol l⁻¹ versus those below 75 nmol l⁻¹ more often developed basal cell carcinoma (odds ratio (OR) = 1.51, 95% confidence interval (CI): 1.10-2.07, P = 0.01) and melanoma (OR = 2.71, 95% CI: 0.98-7.48, P = 0.05). Squamous cell carcinoma incidence tended to be lower in persons with serum 25(OH)-vitamin D concentrations above 75 nmol l⁻¹ compared with those below 75 nmol l⁻¹ (OR = 0.67, 95% CI: 0.44-1.03, P = 0.07). Vitamin D status was not associated with skin cancer incidence when participants were classified as above or below 50 nmol l⁻¹ 25(OH)-vitamin D. Our findings do not indicate that the carcinogenicity of high sun exposure can be counteracted by high vitamin D status. High sun exposure is to be avoided as a means to achieve high vitamin D status.
Abstract:
The risk of vitamin D insufficiency is increased in persons having limited sunlight exposure and dietary vitamin D. Supplementation compliance might be improved with larger doses taken less often, but this may increase the potential for side effects. The objective of the present study was to determine whether a weekly or weekly/monthly regimen of vitamin D supplementation is as effective as daily supplementation without increasing the risk of side effects. Participants were forty-eight healthy adults who were randomly assigned for 3 months to placebo or one of three supplementation regimens: 50 μg/d (2000 IU/d, analysed dose 70 μg/d), 250 μg/week (10 000 IU/week, analysed dose 331 μg/week) or 1250 μg/week (50 000 IU/week, analysed dose 1544 μg/week) for 4 weeks and then 1250 μg/month for 2 months. Daily and weekly doses were equally effective at increasing serum 25-hydroxyvitamin D, which was significantly greater than baseline in all the supplemented groups after 30 d of treatment. Subjects in the 1250 μg treatment group, who had a BMI >26 kg/m2, had a steady increase in urinary Ca in the first 3 weeks of supplementation, and, overall, the relative risk of hypercalciuria was higher in the 1250 μg group than in the placebo group (P = 0.01). Although vitamin D supplementation remains a controversial issue, these data document that supplementing with ≤ 250 μg/week (≤ 10 000 IU/week) can improve or maintain vitamin D status in healthy populations without the risk of hypercalciuria, but 24 h urinary Ca excretion should be evaluated in healthy persons receiving vitamin D3 supplementation in weekly single doses of 1250 μg (50 000 IU).
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI) for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765, Hosmer-Lemeshow p value: 0.11). Cumulative sum, exponentially weighted moving average and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring for lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
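The cumulative sum (CUSUM) charts mentioned above accumulate observed-minus-expected outcomes, so a run of failures exceeding the model's predicted probabilities drives the statistic upward. A minimal sketch of a risk-adjusted, reset-at-zero CUSUM follows; the function name, input data and probabilities are illustrative assumptions, not the study's model:

```python
def risk_adjusted_cusum(outcomes, expected_probs):
    """Cumulative sum of observed-minus-expected lesion failures.

    outcomes: 1 for lesion treatment failure, 0 for success.
    expected_probs: RA-model-predicted failure probability per lesion.
    An upward drift signals worse-than-expected performance; the
    one-sided chart resets at zero so credit is not banked.
    """
    cusum, path = 0.0, []
    for y, p in zip(outcomes, expected_probs):
        cusum = max(0.0, cusum + (y - p))
        path.append(cusum)
    return path

# Hypothetical sequence of four lesions with model-predicted failure risks
path = risk_adjusted_cusum([0, 1, 1, 0], [0.1, 0.2, 0.2, 0.1])
```

In practice such a chart would be compared against a control limit chosen for an acceptable false-alarm rate, which is where the governance response protocols the abstract mentions would come in.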
Abstract:
Scarcity of large parcels of land in well-serviced areas has motivated people to re-develop brownfield land. Most brownfield land carries a high risk of contamination from a wide range of industrial activities such as gas works, factories, railway land and waste tips. In addition, people who live in brownfield re-development areas may be exposed to health hazards. This paper discusses public perceptions of brownfield sites, as well as the risk and mitigation strategies to promote brownfield re-development. Data were gathered from a face-to-face survey of fifty respondents who work in the Brisbane Central Business District (CBD) and an interview with an expert on remediation of contaminated land. From this preliminary study, it is found that the majority of the population are not aware of any brownfield sites near their residence, and those who are aware showed very little concern about their proximity to the site. Further discussion in the paper is based on a simple cross-tabulation analysis. The main risk mitigation strategy for re-development of brownfield sites is updating registration through the Environmental Management Register (EMR) and Contaminated Land Register (CLR). In addition, insurance may be offered to cover overruns in remediation costs.
Abstract:
This paper examines the role of compensation and risk committees in managing and monitoring the risk behaviour of Australian financial firms in the period leading up to the global financial crisis (2006-2008). This empirical study of 711 observations of financial sector firms demonstrates how the coordination of risk management and compensation committees reduces information asymmetry. The study shows that the composition of the risk and compensation committees is positively associated with risk, which, in turn, is associated with firm performance. More importantly, information asymmetry is reduced when a director is a member of both the risk and compensation committees, which moderates the negative association between risk and firm performance for firms with high risk.
Abstract:
Regarded as a normative component of development, risk-taking by young people is a well-researched subject, and some risk-taking behaviours, such as substance use, are particularly well covered because of their potential to adversely affect health and wellbeing. What has remained unclear is the extent of young people's risk-taking while engaged in alcohol and other drug (AOD) treatment, their awareness of the related harms of risk-taking behaviours, and their prior help-seeking for these harms - information which may have a significant impact on the quality and relevance of the care they receive. This paper reports the findings from a brief pilot study exploring those factors in a clinical sample of young people engaged in ongoing AOD counselling.
Abstract:
Cooperative systems, by multiplying information sources along the road, offer considerable potential to improve the assessment of the risk describing a particular driving situation. In this paper, we compare the performance of a cooperative risk assessment approach against a non-cooperative approach, using an advanced simulation framework that allows for accurate, detailed, close-to-reality simulations. Risk is estimated, in both cases, with combinations of indicators based on time-to-collision (TTC). For the non-cooperative approach, vehicles are equipped only with an ACC-like forward-facing ranging sensor. For the cooperative approach, vehicles share information through IEEE 802.11p inter-vehicle communication (IVC) and create an augmented map representing their environment; risk indicators are then extracted from this map. Our system shows that the cooperative risk assessment provides a systematic increase of forward warning to most of the vehicles involved in a freeway emergency braking scenario, compared to a non-cooperative system.
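In the simplest car-following case, the TTC underlying the indicators above reduces to the inter-vehicle gap divided by the closing speed. A minimal sketch, with function and variable names that are assumptions for illustration rather than the paper's own notation:

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """Time-to-collision for a follower approaching a leader in the same lane.

    gap_m: bumper-to-bumper distance in metres.
    v_follower, v_leader: speeds in m/s.
    Returns infinity when the follower is not closing on the leader.
    """
    closing_speed = v_follower - v_leader
    if closing_speed <= 0:
        return float("inf")  # no collision course
    return gap_m / closing_speed

# Follower at 30 m/s closing on a leader braking to 20 m/s, 50 m ahead
ttc = time_to_collision(gap_m=50.0, v_follower=30.0, v_leader=20.0)
```

A cooperative system can evaluate this kind of indicator against vehicles beyond the ranging sensor's line of sight, which is what produces the earlier forward warnings the abstract reports.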
Abstract:
This research has been conducted to ascertain whether people with certain personality types exhibit preferences for particular game genres. Four hundred and sixty-six participants completed an online survey in which they described their preference for various game genres and provided measures of personality. Personality types were measured using the five-factor model of personality. Significant relationships between personality types and game genres were found. The results are interpreted in the context of the features of particular game genres and possible matches between personality traits and these features.