542 results for Time study
Abstract:
In this study of 638 Australian nurses, compliance with hand hygiene (HH), as defined by the “five moments” recommended by the World Health Organisation (2009), was examined. Hypotheses focused on the extent to which time pressure reduces compliance and safety climate (operationalised in relation to HH using colleagues, manager, and hospital as referents) increases compliance. It was also proposed that HH climate would interact with time pressure, such that the negative effects of time pressure would be less marked when HH climate is high. The extent to which the three HH climate variables would interact with each other, either through boosting or compensatory effects, was tested in an exploratory manner. A prospective research design was used in which time pressure and the HH climate variables were assessed at Time 1 and compliance was assessed by self-report two weeks later. Compliance was high but varied significantly across the five HH moments, suggesting that nurses distinguish between inherent and elective HH and also appeared to engage in some implicit rationing of HH. Time pressure dominated, limiting the capacity of HH climate to exert its positive impact on compliance. The workplace most conducive to compliance was one low in time pressure and high in HH climate. Colleagues were very influential in determining compliance, more so than the manager and hospital. Manager and hospital support for HH enhanced the positive effects of colleagues on compliance. Providing training and enhancing knowledge were important, not just for compliance but also for safety climate.
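As an aside, the time pressure × HH climate hypothesis described above is typically tested with a moderated regression. The sketch below is illustrative only, not the study's analysis: variable names, coefficients and the simulated data are assumptions.

```python
# Illustrative sketch (not the study's analysis): a moderated regression testing
# whether hand-hygiene (HH) climate buffers the negative effect of time pressure
# on self-reported compliance. Names and simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 638  # matches the reported sample size; the scores here are simulated
time_pressure = rng.normal(0, 1, n)
hh_climate = rng.normal(0, 1, n)
# Assumed data-generating process: climate weakens the time-pressure penalty.
compliance = (80 - 4 * time_pressure + 3 * hh_climate
              + 2 * time_pressure * hh_climate + rng.normal(0, 5, n))

df = pd.DataFrame({"compliance": compliance,
                   "time_pressure": time_pressure,
                   "hh_climate": hh_climate})

# The '*' term expands to both main effects plus their interaction.
model = smf.ols("compliance ~ time_pressure * hh_climate", data=df).fit()
print(model.summary().tables[1])
```

A significant positive interaction coefficient in such a model would correspond to the buffering effect hypothesised in the abstract.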
Abstract:
BACKGROUND Hamstring strain injuries (HSIs) represent the most common cause of lost playing time in rugby union. Eccentric knee-flexor weakness and between-limb imbalance in eccentric knee-flexor strength are associated with a heightened risk of hamstring injury in other sports; however, these variables have not been explored in rugby union. PURPOSE To determine whether lower levels of eccentric knee-flexor strength or greater between-limb imbalance in this parameter during the Nordic hamstring exercise are risk factors for hamstring strain injury in rugby union. STUDY DESIGN Cohort study; level of evidence, 3. METHODS This prospective study was conducted over the 2014 Super Rugby and Queensland Rugby Union seasons. In total, 178 rugby union players (age, 22.6 ± 3.8 years; height, 185 ± 6.8 cm; mass, 96.5 ± 13.1 kg) had their eccentric knee-flexor strength assessed using a custom-made device during the pre-season. Reports of previous hamstring, quadriceps, groin, calf and anterior cruciate ligament injury were also obtained. The main outcome measure was the prospective occurrence of hamstring strain injury. RESULTS Twenty players suffered at least one hamstring strain during the study period. Players with a history of hamstring strain injury had a 4.1-fold (RR = 4.1, 95% CI = 1.9 to 8.9, p = 0.001) greater risk of subsequent hamstring injury than players without such a history. Between-limb imbalances in eccentric knee-flexor strength of ≥15% and ≥20% increased the risk of hamstring strain injury 2.4-fold (RR = 2.4, 95% CI = 1.1 to 5.5, p = 0.033) and 3.4-fold (RR = 3.4, 95% CI = 1.5 to 7.6, p = 0.003), respectively. Lower eccentric knee-flexor strength and other prior injuries were not associated with an increased risk of future hamstring strain. Multivariate logistic regression revealed that the risk of re-injury was augmented in players with strength imbalances. CONCLUSION Previous hamstring strain injury and between-limb imbalance in eccentric knee-flexor strength were associated with an increased risk of future hamstring strain injury in rugby union. These results support the rationale for reducing imbalance, particularly in players who have suffered a prior hamstring injury, to mitigate the risk of future injury.
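The relative risks quoted above come from standard 2×2-table arithmetic. A minimal sketch is given below; the cell counts are hypothetical, chosen only to show the calculation, not the study's raw data.

```python
# Minimal sketch of how a relative risk (RR) and Wald 95% CI of the kind reported
# above can be computed from a 2x2 table. The counts below are hypothetical; the
# paper reports RR = 4.1 (95% CI 1.9 to 8.9) for prior hamstring injury.
import math

def relative_risk(a, b, c, d):
    """a/b: injured/uninjured among exposed; c/d: injured/uninjured among unexposed."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical counts: 10 of 40 previously injured players vs 10 of 138 others.
rr, ci = relative_risk(10, 30, 10, 128)
print(f"RR = {rr:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```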
Abstract:
The design-build (DB) delivery method has been widely used in the United States due to its reputed superior cost and time performance. However, rigorous studies have produced inconclusive support, and only in terms of overall results, with few attempts being made to relate project characteristics to performance levels. This paper provides a larger and more finely grained analysis of a set of 418 DB projects from the online project database of the Design-Build Institute of America (DBIA), in terms of the time-overrun rate (TOR), early start rate (ESR), early completion rate (ECR) and cost-overrun rate (COR) associated with project type (e.g., commercial/institutional buildings and civil infrastructure projects), owner (e.g., Department of Defense and private corporations), procurement method (e.g., ‘best value with discussion’ and qualifications-based selection), contract method (e.g., lump sum and GMP) and LEED level (e.g., gold and silver). The results show ‘best value with discussion’ to be the dominant procurement method and lump sum the most frequently used contract method. The DB method provides relatively good time performance, with more than 75% of DB projects completed on time or ahead of schedule. However, with more than 50% of DB projects experiencing cost overruns, the DB advantage of cost saving remains uncertain. ANOVA tests indicate that DB projects under different procurement methods have significantly different time performance and that owner type and contract method significantly affect cost performance. In addition to contributing solid new empirical evidence from a large sample concerning the cost and time performance of DB projects, the findings and practical implications of this study help owners understand the likely schedule and budget implications of their particular project characteristics.
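The comparisons above rest on one-way ANOVA across project groupings. The sketch below shows that style of test on simulated cost-overrun rates; the group labels echo the abstract, but the data and group means are invented.

```python
# Hedged sketch (not the paper's dataset): a one-way ANOVA of the kind used to test
# whether cost-overrun rate (COR) differs across owner types. Values are simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical COR samples (%) for three owner types.
dod = rng.normal(3.0, 4.0, 60)          # e.g. Department of Defense projects
private = rng.normal(6.0, 5.0, 80)      # e.g. private corporations
other_public = rng.normal(4.5, 4.5, 50) # e.g. other public owners

f_stat, p_value = f_oneway(dod, private, other_public)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```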
Abstract:
Light gauge steel frame (LSF) floor systems are generally made of lipped channel section joists lined with gypsum plasterboards to provide an adequate fire resistance rating under fire conditions. Recently a new LSF floor system made of welded hollow flange channel (HFC) sections was developed and its fire performance was investigated using full-scale fire tests. The new floor system gave higher fire resistance ratings than conventional LSF floor systems. To avoid expensive and time-consuming full-scale fire tests, finite element analyses were also performed to simulate the fire performance of LSF floors made of HFC joists using both steady-state and transient-state methods. This paper presents the details of the finite element models developed for HFC joists to simulate the structural fire performance of these LSF floor systems under standard fire conditions. Finite element analyses were performed using the measured time–temperature profiles of the failed joists from the fire tests, and their failure times, temperatures and modes, and deflection versus time curves were obtained. The developed finite element models successfully predicted the structural performance of LSF floors made of HFC joists under fire conditions. They were able to simulate the complex behaviour of thin cold-formed steel joists subjected to non-uniform temperature distributions, including local buckling and yielding effects. This study also confirmed the superior fire performance of the newly developed LSF floors made of HFC joists.
Abstract:
Fire safety plays a vital role in building design because an appropriate level of fire safety is essential to safeguard lives and property. Cold-formed steel channel sections, together with fire-resistive plasterboards, are used to construct light-gauge steel frame (LSF) floor systems that provide adequate fire resistance ratings (FRR). It is common practice to use lipped channel sections (LCS) as joists in LSF floor systems, and past research has only considered such systems. This research focuses on adopting improved joist sections, such as hollow flange channel (HFC) sections, to improve the structural performance and FRR of cold-formed LSF floor systems under standard fire conditions. The structural and thermal performance of LSF floor systems made of a welded HFC section, the LiteSteel Beam (LSB), with different plasterboard and insulation configurations was investigated using four full-scale fire tests under standard fires. These fire tests showed that the new LSF floor system with LSB joists improved the FRR in comparison with that of conventional LCS joist systems. The fire tests provided valuable structural and thermal performance data for the tested floor systems, including time–temperature profiles and failure times, temperatures and modes. This paper presents the details of the fire tests conducted in this study and their results, along with some important findings.
Abstract:
The climate in the Arctic is changing faster than anywhere else on Earth. Poorly understood feedback processes relating to Arctic clouds and aerosol–cloud interactions contribute to the limited understanding of present changes in the Arctic climate system, and also to the large spread in projections of future Arctic climate. The problem is exacerbated by the paucity of research-quality observations in the central Arctic. Improved formulations in climate models require such observations, which can only come from in situ measurements in this difficult-to-reach region with logistically demanding environmental conditions. The Arctic Summer Cloud Ocean Study (ASCOS) was the most extensive central Arctic Ocean expedition with an atmospheric focus during the International Polar Year (IPY) 2007–2008. ASCOS focused on the formation and life cycle of low-level Arctic clouds. ASCOS departed from Longyearbyen on Svalbard on 2 August and returned on 9 September 2008. In transit into and out of the pack ice, four short research stations were undertaken in the Fram Strait: two in open water and two in the marginal ice zone. After traversing the pack ice northward, an ice camp was set up on 12 August at 87°21' N, 01°29' W and remained in operation through 1 September, drifting with the ice. During this time, extensive measurements were taken of atmospheric gas and particle chemistry and physics, mesoscale and boundary-layer meteorology, marine biology and chemistry, and upper-ocean physics. ASCOS provides a unique interdisciplinary data set for developing and testing new hypotheses on cloud processes, their interactions with the sea ice and ocean, and the associated physical, chemical, and biological processes and interactions. For example, the first-ever quantitative observation of bubbles in Arctic leads, combined with the unique discovery of marine organic material (polymer gels with an origin in the ocean) inside cloud droplets, suggests the possibility of primary, marine, organically derived cloud condensation nuclei in Arctic stratocumulus clouds. Direct observations of surface fluxes of aerosols could not, however, explain the observed variability in aerosol concentrations, and the balance between local and remote aerosol sources remains an open question. A lack of cloud condensation nuclei (CCN) was at times a controlling factor in low-level cloud formation, and hence in the impact of clouds on the surface energy budget. ASCOS provided detailed measurements of the surface energy balance from the late-summer melt into the initial autumn freeze-up, and documented the effects of clouds and storms on the surface energy balance during this transition. In addition to such process-level studies, the unique, independent ASCOS data set can be, and is being, used for validation of satellite retrievals, operational models, and reanalysis data sets.
Abstract:
AIM: The purpose of this pilot study was to introduce knee alignment as a potential predictor of sedentary activity levels in boys and girls. METHODS: Dual-energy X-ray absorptiometry (DXA) and anthropometric assessments were conducted on 47 children (21 boys and 26 girls; 5-14 y) and their gender-matched parent. Body Mass Index (BMI) and abdominal-to-height ratio were calculated. Lower-extremity alignment was determined from anatomic tibiofemoral angle (TFA) measurements on DXA images. Time spent in moderate-to-vigorous physical activity and in sedentary activities was obtained from a parent-reported questionnaire. Stepwise multiple regression analyses identified anthropometric, musculoskeletal, and activity factors of parents and children that predict total time spent in sedentary behaviour. RESULTS: Weight, total sedentary time of parents and TFA are moderate predictors of sedentary behaviour in children (R2=0.469). When stratifying by gender, TFA and total sedentary time of the parent, as well as waist circumference, are the most useful predictors of sedentary behaviour in boys (R2=0.648). However, weight is the only predictor of sedentary behaviour in girls (R2=0.479). CONCLUSION: Negative associations between TFA and sedentary behaviour indicate that even slight variations in musculoskeletal alignment may influence a child's motivation to be physically active. Although growth and development are complicated by many factors, this pilot study suggests that orthopaedic factors should also be considered when evaluating physical activity in children.
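For readers unfamiliar with stepwise selection, the sketch below shows a simple forward-selection OLS procedure of the kind described above. The predictor names mirror the abstract, but the data, coefficients and p-value threshold are assumptions, not the study's values.

```python
# Illustrative sketch of a forward stepwise OLS regression for predicting children's
# total sedentary time. Data are simulated, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 47
df = pd.DataFrame({
    "weight": rng.normal(35, 10, n),
    "tfa": rng.normal(5, 2, n),                  # tibiofemoral angle (degrees)
    "parent_sedentary": rng.normal(300, 60, n),  # parent sedentary minutes/day
    "waist_circumference": rng.normal(60, 8, n),
})
df["child_sedentary"] = (120 + 3 * df["weight"] - 8 * df["tfa"]
                         + 0.4 * df["parent_sedentary"] + rng.normal(0, 30, n))

def forward_select(data, target, alpha=0.05):
    """Greedy forward selection: repeatedly add the predictor with the smallest p-value < alpha."""
    remaining = [c for c in data.columns if c != target]
    selected = []
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(data[selected + [cand]])
            pvals[cand] = sm.OLS(data[target], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(data[target], sm.add_constant(data[selected])).fit()

model = forward_select(df, "child_sedentary")
print(model.params.round(2), f"\nR^2 = {model.rsquared:.3f}")
```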
Resumo:
It is difficult to determine sulfur-containing volatile organic compounds in the atmosphere because of their reactivity. Primary off-line techniques may suffer losses of analytes during the transportation from field to laboratory and sample preparation. In this study, a novel method was developed to directly measure dimethyl sulfide at parts-per-billion concentration levels in the atmosphere using vacuum ultraviolet single photon ionization time-of-flight mass spectrometry. This technique offers continuous sampling at a response rate of one measurement per second, or cumulative measurements over longer time periods. Laboratory prepared samples of different concentrations of dimethyl sulfide in pure nitrogen gas were analyzed at several sampling frequencies. Good precision was achieved using sampling periods of at least 60 seconds with a relative standard deviation of less than 25%. The detection limit for dimethyl sulfide was below the 3 ppb olfactory threshold. These results demonstrate that single photon ionization time-of-flight mass spectrometry is a valuable tool for rapid, real-time measurements of sulfur-containing organic compounds in the air.
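The precision and detection-limit figures quoted above follow standard arithmetic. The sketch below shows one common way these quantities are derived; the replicate readings, blank noise and calibration sensitivity are made-up numbers, not the paper's data.

```python
# Illustrative arithmetic only: relative standard deviation (RSD) from replicate
# readings and a 3-sigma detection limit from blank noise and calibration slope.
import numpy as np

# Hypothetical replicate signal readings for a 5 ppb dimethyl sulfide standard
# accumulated over 60 s sampling periods (arbitrary ion-count units).
replicates = np.array([1020, 980, 1110, 950, 1060, 1005])

rsd = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"RSD = {rsd:.1f}%")  # the study reports RSD < 25% for >= 60 s periods

# 3-sigma detection limit from blank noise and calibration sensitivity.
blank_sd = 35.0        # hypothetical standard deviation of the blank signal
sensitivity = 200.0    # hypothetical counts per ppb from a calibration curve
lod_ppb = 3 * blank_sd / sensitivity
print(f"Estimated detection limit = {lod_ppb:.2f} ppb")
```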
Abstract:
Background Few studies have been undertaken to understand the employment impact of colorectal cancer, and none has focused on middle-aged individuals with cancer. This study described transitions in, and key factors influencing, work participation during the 12 months following a diagnosis of colorectal cancer. Methods We enrolled 239 adults during 2010 and 2011 who were employed at the time of their colorectal cancer diagnosis and followed them prospectively over 12 months. They were compared with an age- and gender-matched general population group of 717 adults from the Household, Income and Labour Dynamics in Australia (HILDA) Survey. Data were collected using telephone and postal surveys. Primary outcomes included work participation at 12 months, changes in hours worked and time to work re-entry. Multivariable logistic and Cox proportional hazards models were fitted. Results A significantly higher proportion of participants with colorectal cancer (27%) had stopped working at 12 months than participants from the comparison group (8%) (p < 0.001). Participants with cancer who returned to work took a median of 91 days off work (25th–75th percentiles: 14–183 days). For participants with cancer, predictors of not working at 12 months included being older, lower BMI and lower physical well-being. Factors related to delayed work re-entry included not being university-educated, working in a non-professional or managerial role for an employer with more than 20 employees, a longer hospital stay, poorer perceived financial status and receiving or having received chemotherapy. Conclusions Those working and diagnosed with colorectal cancer in middle adulthood can expect to take around three months off work. Individuals treated with chemotherapy, without a university degree and from large employers could be targeted for specific assistance to support a more timely return to work.
Abstract:
Background An increasing body of evidence associates high levels of sitting time with poor health outcomes. The benefits of moderate- to vigorous-intensity physical activity for various aspects of health are now well documented; however, individuals may engage in moderate-intensity physical activity for at least 30 minutes on five or more days of the week and still accumulate a high level of sitting time. The purpose of this study was to examine differences in total wellness among adults relative to high/low levels of sitting time combined with insufficient/sufficient physical activity (PA). The construct of total wellness takes a holistic approach to the body, mind and spirit components of life, an approach which may be more encompassing than some definitions of health. Methods Data were obtained from 226 adult respondents (27 ± 6 years), including 116 (51%) males and 110 (49%) females. Total PA and total sitting time were assessed with the International Physical Activity Questionnaire (IPAQ, short version). The Wellness Evaluation of Lifestyle Inventory was used to assess total wellness. An analysis of covariance (ANCOVA) was used to assess the effect of sitting time/physical activity group on total wellness, with covariates included to partial out the effects of age, sex and work status (student or employed). Cross-tabulations were used to show associations between the IPAQ-derived high/low sitting time with insufficient/sufficient PA categories and the three total wellness groups (i.e. high level of wellness, moderate wellness and wellness development needed). Results The majority of participants were located in the high total sitting time and sufficient PA group. There were statistical differences among the IPAQ groups for total wellness [F (2,220) = 32.5 (p < 0.001)]. A chi-square test revealed a significant difference in the distribution of the IPAQ categories within the wellness classifications [χ2 (N = 226) = 54.5, p < .001]. All (100%) of the participants who self-rated as high total sitting time/insufficient PA were found in the wellness development needed group. In contrast, 72% of participants located in the low total sitting time/sufficient PA group were situated in the moderate wellness group. Conclusion Many participants in this sample who meet the physical activity guidelines sit for longer periods of time than the median Australian sitting time. An understanding of the effects of enhanced PA and reduced sitting time on total wellness can add to the development of public health initiatives. Keywords: IPAQ; The Wellness Evaluation of Lifestyle (WEL); Sedentary lifestyle
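The chi-square result above is a test of independence on a cross-tabulation of IPAQ category by wellness group. The sketch below reproduces that style of test on an invented contingency table; the total matches the reported n of 226 and the layout is loosely consistent with the percentages quoted, but every cell count is an assumption.

```python
# Minimal sketch of the chi-square test of independence described above, applied to a
# hypothetical contingency table of IPAQ sitting/PA categories (rows) by wellness group
# (columns). Cell counts are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

# Columns: high wellness, moderate wellness, wellness development needed
table = np.array([
    [ 0,  0, 60],   # high sitting / insufficient PA
    [10, 70, 30],   # high sitting / sufficient PA
    [ 3, 15,  8],   # low sitting / insufficient PA
    [ 7, 22,  1],   # low sitting / sufficient PA
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
```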
Abstract:
Objective This study explores the spatiotemporal variations of suicide across Australia from 1986 to 2005, discusses the reasons for dynamic changes, and considers future suicide research and prevention strategies. Design Suicide data (1986–2005) and population data were obtained from the Australian Bureau of Statistics. A series of analyses was conducted to examine the pattern of suicide by sex, method and age group over time and geography. Results Differences in suicide rates across sexes, age groups and suicide methods were found across geographical areas. Male suicides were mainly completed by hanging, firearms, gases and self-poisoning. Female suicides were primarily completed by hanging and self-poisoning. Suicide rates were higher in rural areas than in urban areas (capital cities and regional centres). Suicide rates by firearms were higher in rural areas than in urban areas, while the pattern for self-poisoning showed the reverse trend. Suicide rates remained relatively stable for the total population and for those aged between 15 and 54 years, while suicide decreased among those aged 55 years and over during the study period. There was a decrease in suicides by firearms during the study period, especially after 1996 when a new firearm control law was implemented, while suicide by hanging continued to increase. Areas with a high proportion of Indigenous population (eg, northwest Queensland and the far north of the Northern Territory) showed a substantial increase in suicide incidence after 1995. Conclusions Suicide rates varied over time and space and across sexes, age groups and suicide methods. This study provides detailed patterns of suicide to inform suicide control and prevention strategies for specific subgroups and for areas of high and increased risk.
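The basic quantity behind these comparisons is a rate per 100,000 population, computed separately by area, period and subgroup. The sketch below shows that calculation on fabricated example records; none of the counts or populations come from the study.

```python
# Illustrative sketch only: crude suicide rates per 100,000 by area type and year
# from count and population data. The records below are fabricated examples.
import pandas as pd

records = pd.DataFrame({
    "year":       [1990, 1990, 2000, 2000],
    "area_type":  ["urban", "rural", "urban", "rural"],
    "suicides":   [1200, 450, 1150, 520],                       # hypothetical counts
    "population": [9_500_000, 2_800_000, 10_800_000, 2_900_000] # hypothetical populations
})

records["rate_per_100k"] = 100_000 * records["suicides"] / records["population"]
print(records.pivot(index="year", columns="area_type", values="rate_per_100k").round(1))
```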
Abstract:
Background Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 in Queensland, Australia. Results Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favoured if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
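The discrete-time survival formulation mentioned above amounts to a logistic regression on person-period data. The sketch below illustrates that setup only; the data are simulated, and the area-level random effect and spatial prior used in the paper are omitted for brevity.

```python
# Hedged sketch of a discrete-time survival setup: follow-up is split into
# person-period records and the death indicator is modelled with logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
follow_up_years = rng.integers(1, 11, n)   # years observed (censored at 10)
died = rng.integers(0, 2, n)               # 1 if death ended follow-up
age_group = rng.choice(["20-59", "60-84"], n)

# Expand to one row per person per year at risk; event = 1 only in the final year if died.
rows = []
for pid in range(n):
    for year in range(1, follow_up_years[pid] + 1):
        event = int(died[pid] and year == follow_up_years[pid])
        rows.append({"person": pid, "year": year, "event": event,
                     "age_group": age_group[pid]})
person_period = pd.DataFrame(rows)

# Baseline hazard varies by year at risk; age group enters as a fixed effect.
model = smf.logit("event ~ C(year) + C(age_group)", data=person_period).fit(disp=False)
print(model.params.round(3))
```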
Abstract:
The thick package of ~2.7 Ga mafic and ultramafic lavas and intrusions preserved within the Neoarchean succession of the Kalgoorlie Terrane in Western Australia provides valuable insight into the geological processes controlling the most prodigious episode of growth and preservation of juvenile continental crust in Earth’s history. Limited exposure of these rocks results in uncertainty about their age, physical and chemical characteristics, and stratigraphic relationships. This in turn prevents confident correlation of regional occurrences of mafic and ultramafic successions (both intrusive and extrusive) and hinders interpretation of the tectonic setting and magmatic evolution. A recent stratigraphic drilling program of the Neoarchean stratigraphy of the Agnew Greenstone Belt in Western Australia has provided continuous exposure through a c. 7 km thick sequence of mafic and ultramafic units. In this study, we present a volcanological, lithogeochemical and chronological study of the Agnew Greenstone Belt, and provide the first pre-2690 Ma regional correlation across the Kalgoorlie Terrane. The Agnew Greenstone Belt records ~30 m.y. of episodic ultramafic–mafic magmatism comprising two cycles, each defined by a komatiite that is overlain by units that become more evolved and contaminated with time. The sequence is divided into nine conformable packages, each consisting of stacked subaqueous lava flows and comagmatic intrusions, as well as two sills without associated extrusions. Lavas, with the exception of intercalations between two units, form a layer-cake stratigraphy and were likely erupted from a system of fissures tapping the same magma source. The komatiites are not contaminated by continental crust ([La/Sm]PM ~0.7) and are of the Al-undepleted Munro type. Crustal contamination is evident in many units (Songvang Basalt, Never Can Tell Basalt, Redeemer Basalt, and Turrett Dolerite), as judged by [La/Sm]PM > 1, negative Nb and Ti anomalies, and geochemical mixing trends towards felsic contaminants. Crystal fractionation was also significant, with early olivine and chromite removal (Mg# > 65) followed by plagioclase and clinopyroxene removal (Mg# < 65) and, in the most evolved case, titanomagnetite accumulation. Three new TIMS dates on granophyric zones of mafic sills and one ICP-MS date from an interflow felsic tuff are presented and used for regional stratigraphic correlation. Cycle I magmatism began at ~2720 Ma and ended ~2705 Ma, whereas cycle II began ~2705 Ma and ended at 2690.7 ± 1.2 Ma. Regional correlations indicate that the western Kalgoorlie Terrane preserves a remarkably similar stratigraphy that can be recognised at Agnew, Ora Banda and Coolgardie, whereas the eastern part of the terrane (e.g., Kambalda Domain) does not include cycle I but correlates well with cycle II. This research supports an autochthonous model of greenstone formation, in which one large igneous province, represented by two complete cycles, is constructed on sialic crust. New stratigraphic correlations for the Kalgoorlie Terrane indicate that many units can be traced over distances >100 km, which has implications for exploration targeting of stratigraphically hosted ultramafic Ni and VMS deposits.
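For readers outside geochemistry, the two indices quoted above (Mg# and primitive-mantle-normalised La/Sm) are simple ratios. The worked-formula sketch below is illustrative only; the sample composition and the normalising values (in the style of McDonough & Sun, 1995) are assumptions, not data from the study.

```python
# Worked-formula sketch for Mg# (from MgO and FeO wt%) and a primitive-mantle-
# normalised La/Sm ratio. All input values here are assumed, not the study's data.
MGO_MOLAR_MASS = 40.30        # g/mol
FEO_MOLAR_MASS = 71.84        # g/mol
PM_LA, PM_SM = 0.648, 0.406   # assumed primitive-mantle La and Sm (ppm)

def mg_number(mgo_wt, feo_wt):
    """Mg# = 100 * molar Mg / (Mg + Fe2+), with all Fe taken as FeO."""
    mg = mgo_wt / MGO_MOLAR_MASS
    fe = feo_wt / FEO_MOLAR_MASS
    return 100 * mg / (mg + fe)

def la_sm_pm(la_ppm, sm_ppm):
    """Primitive-mantle-normalised [La/Sm]; values > 1 are read above as crustal contamination."""
    return (la_ppm / PM_LA) / (sm_ppm / PM_SM)

# Hypothetical basalt analysis: 7.5 wt% MgO, 11.0 wt% FeO, 4.0 ppm La, 2.5 ppm Sm.
print(f"Mg# = {mg_number(7.5, 11.0):.1f}")
print(f"[La/Sm]PM = {la_sm_pm(4.0, 2.5):.2f}")
```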
Abstract:
Background Optimal infant nutrition comprises exclusive breastfeeding, with complementary foods introduced from six months of age. How parents make decisions about this is poorly studied. This study begins to address the dearth of research into the decision-making processes used by first-time mothers in relation to the introduction of complementary foods. Methods This qualitative exploratory study was conducted using 13 interviews and 3 focus groups, guided by a semi-structured interview guide based on the Theory of Planned Behaviour (TPB). The TPB, a well-validated decision-making model, identifies the key determinants of a behaviour through behavioural beliefs, subjective norms, and perceived behavioural control over the behaviour. These beliefs are held to predict behavioural intention to perform the behaviour and, in turn, its performance. A purposive convenience sample of 21 metropolitan parents, recruited through advertising at local playgroups and childcare centres and electronically through the university community email list, self-selected to participate. Data were analysed thematically within the theoretical constructs of behavioural beliefs, subjective norms and perceived behavioural control. Data relating to sources of information about the introduction of complementary foods were also collected. Results Overall, first-time mothers found that waiting until six months was challenging, despite knowledge of the WHO recommendations and an initial desire to comply with this guideline. Beliefs that complementary foods would assist the infants' weight gain, sleeping patterns and enjoyment at meal times were identified. Barriers preventing parents from complying with the recommendations included subjective and group norms, peer influences, infant cues indicating early readiness and food labelling inconsistencies. The most valued source of information was peers who had recently introduced complementary foods. Conclusions First-time mothers in this study did not demonstrate a good understanding of the rationale behind the WHO recommendations, nor did they fully understand the signs of infant readiness to commence solid foods. Factors that assisted waiting until six months were a trusting relationship with a health professional whose practice and advice were consistent with the recommendations and/or an infant who was developmentally ready for complementary foods at six months and accepted them with ease and enthusiasm.
Abstract:
Current Australian policies and curricular frameworks demand that teachers and students use technology creatively and meaningfully in classrooms to develop students into 21st-century technological citizens. English teachers and students also have to learn a new metalanguage around visual grammar, since multimodal tasks often combine the creative and critical General Capabilities (GC) with those of ICT and literacy in the Australian Curriculum: English (AC:E). Both teachers and learners come to these tasks with varying degrees of techno-literacy, skills and access to technologies. This paper reports on case-study research following a technology-based collaborative professional development (PD) program between a university lecturer facilitator and English teachers in a secondary Catholic school. The study found that the possibilities for creative and critical engagement are rich, but there are real, grounded constraints, such as lack of time, that impede teachers' ability to master and teach new technologies in classrooms. Furthermore, pedagogical approaches are affected by technical skill levels and school infrastructure concerns, which can militate against the effective use of ICTs in school settings. The research project was funded by the Brisbane Catholic Education Office and focused on how teachers can be supported in these endeavours in educational contexts as they prepare students of English to be creative global citizens who use technology creatively.