184 results for Post-registration changes to medicines
Abstract:
Academic Skills and Scholarship for Nurses is a pilot programme which addresses the academic aspirations and study preparedness of mature-aged students. It is a series of four workshops designed and implemented by QUT Library staff in collaboration with Nursing and Midwifery academics, for pre- and post-registration nursing staff within the region of Caboolture, Redcliffe and Kilcoy. The programme extends QUT Library’s learning and study support expertise to the local community. The intended outcomes of the programme are fourfold: firstly, to encourage the educational aspirations of mature-aged students and establish realistic expectations and practical strategies for beginning tertiary study; secondly, to develop skills congruent with lifelong learning principles and the continuing professional development requirements of professional nursing bodies; thirdly, to align with QUT strategies for widening participation in higher education; and finally, to strengthen existing relationships between academic and professional staff, and between QUT and the local community, for the benefit of all stakeholders.
Abstract:
Background: Medication-related problems often occur in the immediate post-discharge period. To reduce medication misadventure the Commonwealth Government funds home medicines reviews (HMRs). HMRs are initiated when general practitioners refer consenting patients to their community pharmacists, who then engage accredited pharmacists to review patients' medicines in their homes. Aim: To determine if hospital-initiated medication reviews (HIMRs) can be implemented in a more timely manner than HMRs; and to assess the impact of a bespoke referral form with comorbidity-specific questions on the quality of reports. Method: Eligible medical inpatients at risk of medication misadventure were referred by the hospital liaison pharmacist to participating accredited pharmacists post-discharge from hospital. Social, demographic and laboratory data were collected from medical records and during interviews with consenting patients. Issues raised in the HIMR reports were categorised as intervention/action, information given or recommendation, and assigned a rank of clinical significance. Results: HIMRs were conducted within 11.6 ± 6.6 days post-discharge. Thirty-six HIMR reports were evaluated and 1442 issues identified - information given (n = 1204), recommendations made (n = 88) and actions taken (n = 150). The majority of issues raised (89%) had a minor clinical impact. The bespoke referral form prompted approximately half of the issues raised. Conclusion: HIMRs can be facilitated in a more timely manner than post-discharge HMRs. There was an associated positive clinical impact of issues raised in the HIMR reports.
Abstract:
Exercise interventions during adjuvant cancer treatment have been shown to increase functional capacity, relieve fatigue and distress and, in one recent study, assist chemotherapy completion. These studies have been limited to breast, prostate or mixed cancer groups and it is not yet known if a similar intervention is even feasible among women diagnosed with ovarian cancer. Women undergoing treatment for ovarian cancer commonly have extensive pelvic surgery followed by high intensity chemotherapy. It is hypothesized that women with ovarian cancer may benefit most from a customised exercise intervention during chemotherapy treatment. This could reduce the number and severity of chemotherapy-related side-effects and optimize treatment adherence. Hence, the aim of the research was to assess the feasibility and acceptability of a walking intervention in women with ovarian cancer whilst undergoing chemotherapy, as well as pre-post intervention changes in a range of physical and psychological outcomes. Newly diagnosed women with ovarian cancer were recruited from the Royal Brisbane and Women’s Hospital (RBWH) to participate in a walking program throughout chemotherapy. The study used a one-group pre-post intervention test design. Baseline (conducted following surgery but prior to the first or second chemotherapy cycles) and follow-up (conducted three weeks after the last chemotherapy dose was received) assessments were performed. To accommodate changes in side-effects associated with treatment, specific weekly walking targets with respect to frequency, intensity and duration were individualised for each participant. To assess feasibility, adherence and compliance with prescribed walking sessions, withdrawals and adverse events were recorded. Physical and psychological outcomes assessed included functional capacity, body composition, anxiety and depression, symptoms experienced during treatment and quality of life. Chemotherapy completion data were also documented and self-reported program helpfulness was assessed using a questionnaire post intervention. Forty-two women were invited to participate. Nine women were recruited, all of whom completed the program. There were no adverse events associated with participating in the intervention and all women reported that the walking program was helpful during their neo-adjuvant or adjuvant chemotherapy treatment. Adherence and compliance with the walking prescription were high. On average, women achieved at least two of their three individual weekly prescription targets 83% of the time (range 42% to 94%). Positive changes were found in functional capacity and quality of life, in addition to reductions in the number and intensity of treatment-associated symptoms over the course of the intervention period. Functional capacity increased for all nine women from baseline to follow-up assessment, with improvements ranging from 10% to 51%. Quality of life improvements were also noted, especially in the physical well-being scale (baseline: median 18; follow-up: median 23). Treatment symptoms reduced in presence and severity, specifically constipation, pain and fatigue, post intervention. These positive yet preliminary results suggest that a walking intervention for women receiving chemotherapy for ovarian cancer is safe, feasible and acceptable.
Importantly, women perceived the program to be helpful and rewarding, despite being conducted during a time typically associated with elevated distress and treatment symptoms that are often severe enough to alter or cease chemotherapy prescription.
Abstract:
In response to concerns about the quality of English Language Learning (ELL) education at tertiary level, the Chinese Ministry of Education (CMoE) launched the College English Reform Program (CERP) in 2004. By means of a press release (CMoE, 2005) and a guideline document titled College English Curriculum Requirements (CECR) (CMoE, 2007), the CERP proposed two major changes to the College English assessment policy: (1) the shift to optional status for the compulsory external test, the College English Test Band 4 (CET4); and (2) the incorporation of formative assessment into the existing summative assessment framework. This study investigated the interactions between the College English assessment policy change, its theoretical underpinnings, and the assessment practices within two Chinese universities (one Key University and one Non-Key University). It adopted a sociocultural theoretical perspective to examine the implementation process as experienced by local actors at the institutional and classroom levels. Systematic data analysis using a constant comparative method (Merriam, 1998) revealed that contextual factors and implementation issues did not lead to significant differences between the two cases. A lack of training in assessment, together with sociocultural factors such as the traditional emphasis on the product of learning and hierarchical teacher/student relationships, was decisive in limiting the effect of the reform.
Abstract:
Expected satiety has been shown to play a key role in decisions around meal size. Recently it has become clear that these expectations can also influence the satiety that is experienced after a food has been consumed. As such, increasing the expected and actual satiety a food product confers without increasing its caloric content is of importance. In this study we sought to determine whether this could be achieved via product labelling. Female participants (N=75) were given a 223-kcal yoghurt smoothie for lunch. In separate conditions the smoothie was labelled as a diet brand, a highly-satiating brand, or an ‘own brand’ control. Expected satiety was assessed using rating scales and a computer-based ‘method of adjustment’, both prior to consuming the smoothie and 24 hours later. Hunger and fullness were assessed at baseline, immediately after consuming the smoothie, and for a further three hours. Despite the fact that all participants consumed the same food, the smoothie branded as highly-satiating was consistently expected to deliver more satiety than the other ‘brands’; this difference was sustained 24 hours after consumption. Furthermore, post-consumption and over three hours, participants consuming this smoothie reported significantly less hunger and significantly greater fullness. These findings demonstrate that the satiety that a product confers depends in part on information that is present around the time of consumption. We suspect that this process is mediated by changes to expected satiety. These effects may potentially be utilised in the development of successful weight-management products.
Is the public sector ready to collaborate? Human resource implications of collaborative arrangements
Abstract:
Relational governance arrangements across agencies and sectors have become prevalent as a means for government to become more responsive and effective in addressing complex, large scale or ‘wicked’ problems. The primary characteristic of such ‘collaborative’ arrangements is the utilisation of the joint capacities of multiple organisations to achieve collaborative advantage, which Huxham (1993) defines as the attainment of creative outcomes that are beyond the ability of single agencies to achieve. Attaining collaborative advantage requires organisations to develop collaborative capabilities that prepare organisations for collaborative practice (Huxham, 1993b). Further, collaborations require considerable investment of staff effort that could potentially be used beneficially elsewhere by both the government and non-government organisations involved in collaboration (Keast and Mandell, 2010). Collaborative arrangements to deliver services therefore require a reconsideration of the way in which resources, including human resources, are conceptualised and deployed, as well as changes to both the structure of public service agencies and the systems and processes by which they operate (Keast, forthcoming). A main aim of academic research and theorising has been to explore and define the requisite characteristics to achieve collaborative advantage. Such research has tended to focus on definitional, structural (Turrini, Cristofoli, Frosini, & Nasi, 2009) and organisational (Huxham, 1993) aspects and less on the roles government plays within cross-organisational or cross-sectoral arrangements. Ferlie and Steane (2002) note that there has been a general trend towards management-led reforms of public agencies, including the HRM practices utilised. Such trends have been significantly influenced by New Public Management (NPM) ideology, with limited consideration of the implications for HRM practice in collaborative, rather than market, contexts. Utilising case study data from a suite of collaborative efforts in Queensland, Australia, collected over a decade, this paper presents an examination of the network roles government agencies undertake. Implications for HRM in public sector agencies working within networked arrangements are drawn, and implications for job design, recruitment, deployment and staff development are presented. The paper also makes theoretical advances in our understanding of Strategic Human Resource Management (SHRM) in network settings. While networks form part of the strategic armoury of government, networks operate to achieve collaborative advantage. SHRM, with its focus on competitive advantage, is argued to be appropriate in market situations; however, it is not an ideal conceptualisation in network situations. Commencing with an overview of the literature on networks and network effectiveness, the paper presents the case studies and methodology, provides findings from the case studies regarding the roles government plays in achieving collaborative advantage, and presents implications for HRM practice. Implications for SHRM are considered.
Abstract:
Purpose – The purpose of this paper is to jointly assess the impact of regulatory reform for corporate fundraising in Australia (CLERP Act 1999) and the relaxation of ASX admission rules in 1999, on the accuracy of management earnings forecasts in initial public offer (IPO) prospectuses. The relaxation of ASX listing rules permitted a new category of new economy firms (commitments test entities (CTEs)) to list without a prior history of profitability, while the CLERP Act (introduced in 2000) was accompanied by tighter disclosure obligations and stronger enforcement action by the corporate regulator (ASIC). Design/methodology/approach – All IPO earnings forecasts in prospectuses lodged between 1998 and 2003 are examined to assess the pre- and post-CLERP Act impact. Based on active ASIC enforcement action in the post-reform period, IPO firms are hypothesised to provide more accurate forecasts, particularly CTE firms, which are less likely to have a reasonable basis for forecasting. Research models are developed to empirically test the impact of the reforms on CTE and non-CTE IPO firms. Findings – The new regulatory environment has had a positive impact on management forecasting behaviour. In the post-CLERP Act period, the accuracy of prospectus forecasts and their revisions significantly improved and, as expected, the results are primarily driven by CTE firms. However, the majority of prospectus forecasts continue to be materially inaccurate. Originality/value – The results highlight the need to control for both the changing nature of listed firms and the level of enforcement action when examining responses to regulatory changes to corporate fundraising activities.
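The design described above compares forecast accuracy for CTE and non-CTE IPO firms before and after the CLERP Act, which maps naturally onto a regression with a reform-period indicator, a CTE indicator, and their interaction. The sketch below is only an illustration of that structure, written in Python with hypothetical variable names and simulated data; it is not the paper's actual model specification or sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200

# Simulated IPO-level data; all variable names are hypothetical.
df = pd.DataFrame({
    "post_clerp": rng.integers(0, 2, n),   # 1 = prospectus lodged after the CLERP Act
    "cte": rng.integers(0, 2, n),          # 1 = commitments test entity
    "firm_size": rng.normal(18.0, 1.5, n), # e.g. log of total assets, as a control
})
# Absolute forecast error, simulated so that CTE firms improve most post-reform.
df["abs_forecast_error"] = (
    0.30 + 0.15 * df["cte"] - 0.05 * df["post_clerp"]
    - 0.10 * df["post_clerp"] * df["cte"]
    - 0.01 * df["firm_size"] + rng.normal(0, 0.08, n)
).clip(lower=0)

# The post_clerp:cte interaction tests whether CTE firms' forecast accuracy
# improved disproportionately after the reform.
model = smf.ols("abs_forecast_error ~ post_clerp * cte + firm_size", data=df).fit()
print(model.summary().tables[1])
```

In a specification of this shape, a significantly negative interaction coefficient would be consistent with the reported finding that the post-reform improvement in forecast accuracy is driven primarily by CTE firms.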
Abstract:
Historically, determining the country of origin of a published work presented few challenges, because works were generally published physically – whether in print or otherwise – in a distinct location or few locations. However, publishing opportunities presented by new technologies mean that we now live in a world of simultaneous publication – works that are first published online are published simultaneously to every country in the world in which there is Internet connectivity. While this is certainly advantageous for the dissemination and impact of information and creative works, it creates potential complications under the Berne Convention for the Protection of Literary and Artistic Works (“Berne Convention”), an international intellectual property agreement to which most countries in the world now subscribe. Under the Berne Convention’s national treatment provisions, rights accorded to foreign copyright works may not be subject to any formality, such as registration requirements (although member countries are free to impose formalities in relation to domestic copyright works). In Kernel Records Oy v. Timothy Mosley p/k/a Timbaland, et al., however, the Florida Southern District Court of the United States ruled that first publication of a work on the Internet via an Australian website constituted “simultaneous publication all over the world,” and therefore rendered the work a “United States work” under the definition in section 101 of the U.S. Copyright Act, subjecting the work to registration formality under section 411. This ruling is in sharp contrast with an earlier decision delivered by the Delaware District Court in Håkan Moberg v. 33T LLC, et al., which arrived at the opposite conclusion. The conflicting rulings of the U.S. courts reveal the problems posed by new forms of publishing online and demonstrate a compelling need for further harmonization between the Berne Convention, domestic laws and the practical realities of digital publishing. In this article, we argue that even if a work first published online can be considered to be simultaneously published all over the world it does not follow that any country can assert itself as the “country of origin” of the work for the purpose of imposing domestic copyright formalities. More specifically, we argue that the meaning of “United States work” under the U.S. Copyright Act should be interpreted in line with the presumption against extraterritorial application of domestic law to limit its application to only those works with a real and substantial connection to the United States. There are gaps in the Berne Convention’s articulation of “country of origin” which provide scope for judicial interpretation, at a national level, of the most pragmatic way forward in reconciling the goals of the Berne Convention with the practical requirements of domestic law. We believe that the uncertainties arising under the Berne Convention created by new forms of online publishing can be resolved at a national level by the sensible application of principles of statutory interpretation by the courts. While at the international level we may need a clearer consensus on what amounts to “simultaneous publication” in the digital age, state practice may mean that we do not yet need to explore textual changes to the Berne Convention.
Abstract:
Acknowledgement that many children in Australia travel in restraints that do not offer them the best protection has led to recent changes in legislation such that the type of restraint for children under 7 years is now specified. This paper reports the results of two studies (observational; focus group/survey) carried out in the state of Queensland to evaluate the effectiveness of these changes to the legislation. Observations suggested that almost all of the children estimated as aged 0-12 years were restrained (95%). Analysis of the type of restraint used for target-aged children (0-6 year olds) suggests that the proportion using an age-appropriate restraint has increased by an estimated 7% since enactment of the legislation. However, around 1 in 4 children estimated as aged under 7 years were using restraints too large for good fit. Results from the survey and focus group suggested parents were supportive of the changes in legislation. Non-Indigenous parents agreed that the changes had been necessary, were effective at getting children into the right restraints, were easy to understand, and made it clear which restraint to use with children. Moreover, they did not see the legislation as too complicated or too hard to comply with. Indigenous parents who participated in a focus group also regarded the legislation as improving children’s safety. However, they identified the cost of restraints as an important barrier to compliance. In summary, the legislation appears to have had a positive effect on compliance levels and on raising parental awareness of the need to restrain children in child-specific restraints for longer. However, it would seem that an important minority of parents transition their children into larger restraints too early for optimal protection. Intervention efforts should aim to better inform these parents about appropriate ages for transition, especially from forward-facing child seats. This could potentially be through use of other important transitions that occur at the same age, such as starting school. The small proportion of parents who do not restrain their children at all is also an important community sector to target. Finally, obtaining restraints presents a significant barrier to compliance for parents on limited incomes, and interventions are needed to address this.
Abstract:
It is frequently reported that the actual weight loss achieved through exercise interventions is less than theoretically expected. Amongst other compensatory adjustments that accompany exercise training (e.g., increases in resting metabolic rate and energy intake), a possible cause of the less than expected weight loss is a failure to produce a marked increase in total daily energy expenditure due to a compensatory reduction in non-exercise activity thermogenesis (NEAT). Therefore, there is a need to understand how behaviour is modified in response to exercise interventions. The proposed benefits of exercise training are numerous, including changes to fat oxidation. Given that a diminished capacity to oxidise fat could be a factor in the aetiology of obesity, an exercise training intensity that optimises fat oxidation in overweight/obese individuals would improve impaired fat oxidation, and potentially reduce health risks that are associated with obesity. To improve our understanding of the effectiveness of exercise for weight management, it is important to ensure exercise intensity is appropriately prescribed, and to identify and monitor potential compensatory behavioural changes consequent to exercise training. In line with the gaps in the literature, three studies were performed. The aim of Study 1 was to determine the effect of acute bouts of moderate- and high-intensity walking exercise on NEAT in overweight and obese men. Sixteen participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60 min on a motorised treadmill at 6 km.h-1. The 60-min HIE session consisted of walking in 5-min intervals at 6 km.h-1 and 10% grade followed by 5 min at 0% grade. NEAT was assessed by accelerometer three days before, on the day of, and three days after the exercise sessions. There was no significant difference in NEAT vector magnitude (counts.min-1) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either protocol. In addition, there was no change in NEAT during the three days following the MIE session; however, NEAT increased by 16% on day 7 (post-exercise) compared with the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT was increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (day 1, day 2 and day 3); P = 0.03, 0.03, 0.02, respectively. To conclude, a single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, extending the monitoring of NEAT allowed the detection of a 48-hour delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of accumulated exercise sessions over a week on NEAT. In Study 2, there were two primary aims. The first aim was to test the reliability of a discontinuous incremental exercise protocol (DISCON-FATmax) to identify the workload at which fat oxidation is maximised (FATmax). Ten overweight and obese sedentary men (mean BMI of 29.5 ± 4.5 kg/m2 and mean age of 28.0 ± 5.3 y) participated in this study and performed two identical DISCON-FATmax tests one week apart. Each test consisted of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The starting workload of 28 W was increased every 4 min using 14 W increments, followed by 2-min rest intervals.
When the respiratory exchange ratio was consistently >1.0, the workload was increased by 14 W every 2 min until volitional exhaustion. Fat oxidation was measured by indirect calorimetry. The mean FATmax, VO2peak, %VO2peak and %Wmax at which FATmax occurred during the two tests were 0.23 ± 0.09 and 0.18 ± 0.08 (g.min-1); 29.7 ± 7.8 and 28.3 ± 7.5 (ml.kg-1.min-1); 42.3 ± 7.2 and 42.6 ± 10.2 (%VO2max) and 36.4 ± 8.5 and 35.4 ± 10.9 (%), respectively. A paired-samples T-test revealed a significant difference in FATmax (g.min-1) between the tests (t = 2.65, P = 0.03). The mean difference in FATmax was 0.05 (g.min-1) with the 95% confidence interval ranging from 0.01 to 0.18. Paired-samples T-test, however, revealed no significant difference in the workloads (i.e. W) between the tests, t (9) = 0.70, P = 0.4. The intra-class correlation coefficient for FATmax (g.min-1) between the tests was 0.84 (95% confidence interval: 0.36-0.96, P < 0.01). However, Bland-Altman analysis revealed a large disagreement in FATmax (g.min-1) related to W between the two tests; 11 ± 14 W (4.1 ± 5.3 %VO2peak). These data demonstrate two important phenomena associated with exercise-induced substrate oxidation; firstly, that maximal fat oxidation derived from a discontinuous FATmax protocol differed statistically between repeated tests, and secondly, there was large variability in the workload corresponding with FATmax. The second aim of Study 2 was to test the validity of a DISCON-FATmax protocol by comparing maximal fat oxidation (g.min-1) determined by DISCON-FATmax with fat oxidation (g.min-1) during a continuous exercise protocol using a constant load (CONEX). Ten overweight and obese sedentary males (BMI = 29.5 ± 4.5 kg/m2; age = 28.0 ± 4.5 y) with a VO2max of 29.1 ± 7.5 ml.kg-1.min-1 performed a DISCON-FATmax test consisting of alternate 4-min exercise and 2-min rest intervals on a cycle ergometer. The 1-h CONEX protocol used the workload from the DISCON-FATmax to determine FATmax. The mean FATmax, VO2max, %VO2max and workload at which FATmax occurred during the DISCON-FATmax were 0.23 ± 0.09 (g.min-1); 29.1 ± 7.5 (ml.kg-1.min-1); 43.8 ± 7.3 (%VO2max) and 58.8 ± 19.6 (W), respectively. The mean fat oxidation during the 1-h CONEX protocol was 0.19 ± 0.07 (g.min-1). A paired-samples T-test revealed no significant difference in fat oxidation (g.min-1) between DISCON-FATmax and CONEX, t (9) = 1.85, P = 0.097 (two-tailed). There was also no significant correlation in fat oxidation between the DISCON-FATmax and CONEX (R = 0.51, P = 0.14). Bland-Altman analysis revealed a large disagreement in fat oxidation between the DISCON-FATmax and CONEX; the upper limit of agreement was 0.13 (g.min-1) and the lower limit of agreement was −0.03 (g.min-1). These data suggest that the CONEX and DISCON-FATmax protocols did not elicit different rates of fat oxidation (g.min-1). However, the individual variability in fat oxidation was large, particularly in the DISCON-FATmax test. Further research is needed to ascertain the validity of graded exercise tests for predicting fat oxidation during constant load exercise sessions. The aim of Study 3 was to compare the impact of two different intensities of four weeks of exercise training on fat oxidation, NEAT, and appetite in overweight and obese men.
Using a cross-over design, 11 participants (BMI = 29 ± 4 kg/m2; age = 27 ± 4 y) participated in a training study and were randomly assigned initially to: [1] a low-intensity (45% VO2max) exercise (LIT) or [2] a high-intensity interval (alternate 30 s at 90% VO2max followed by 30 s rest) exercise (HIIT) of 40-min duration, three times a week. Participants completed four weeks of supervised training and between cross-over had a two week washout period. At baseline and the end of each exercise intervention, VO2max, fat oxidation, and NEAT were measured. Fat oxidation was determined during a standard 30-min continuous exercise bout at 45% VO2max. During the steady state exercise, expired gases were measured intermittently for 5-min periods and HR was monitored continuously. In each training period, NEAT was measured for seven consecutive days using an accelerometer (RT3) the week before, at week 3 and the week after training. Subjective appetite sensations and food preferences were measured immediately before and after the first exercise session every week for four weeks during both LIT and HIIT. The mean fat oxidation rate during the standard continuous exercise bout at baseline for both LIT and HIIT was 0.14 ± 0.08 (g.min-1). After four weeks of exercise training, the mean fat oxidation was 0.178 ± 0.04 and 0.183 ± 0.04 g.min-1 for LIT and HIIT, respectively. The mean NEAT (counts.min-1) was 45 ± 18 at baseline, 55 ± 22 and 44 ± 16 during training, and 51 ± 14 and 50 ± 21 after training for LIT and HIIT, respectively. There was no significant difference in fat oxidation between LIT and HIIT. Moreover, although not statistically significant, there was some evidence to suggest that LIT and HIIT tend to increase fat oxidation during exercise at 45% VO2max (P = 0.14 and 0.08, respectively). The order of training treatment did not significantly influence changes in fat oxidation, NEAT, and appetite. NEAT (counts.min-1) was not significantly different in the week following training for either LIT or HIIT. Although not statistically significant (P = 0.08), NEAT was 20% lower during week 3 of exercise training in HIIT compared with LIT. Examination of appetite sensations revealed differences in the intensity of hunger, with higher ratings after LIT compared with HIIT. No differences were found in preferences for high-fat sweet foods between LIT and HIIT. In conclusion, the results of this thesis suggest that while fat oxidation during steady state exercise was not affected by the level of exercise intensity, there is strong evidence to suggest that intense exercise could have a debilitative effect on NEAT.
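The repeatability analysis in Study 2 above rests on three statistics computed from paired measurements: a paired-samples t-test, an intra-class correlation coefficient, and Bland-Altman limits of agreement. The sketch below is a minimal Python illustration of how those statistics are typically computed; the numbers are made-up example values rather than the study's data, and the ICC shown is the two-way absolute-agreement, single-measures form, which is only assumed to match the one used in the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical paired FATmax values (g.min-1) from two repeated tests
# (illustrative numbers only; not the study's data).
test1 = np.array([0.31, 0.18, 0.25, 0.12, 0.29, 0.22, 0.35, 0.15, 0.27, 0.20])
test2 = np.array([0.24, 0.15, 0.21, 0.10, 0.22, 0.18, 0.30, 0.12, 0.23, 0.16])

# Paired-samples t-test: does mean FATmax differ between the two tests?
t_stat, p_value = stats.ttest_rel(test1, test2)

# Bland-Altman statistics: bias and 95% limits of agreement.
diff = test1 - test2
bias = diff.mean()
loa_lower = bias - 1.96 * diff.std(ddof=1)
loa_upper = bias + 1.96 * diff.std(ddof=1)

# ICC(A,1): two-way, absolute-agreement, single-measures ICC via ANOVA sums of squares.
data = np.column_stack([test1, test2])
n, k = data.shape
grand_mean = data.mean()
ss_subjects = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
ss_raters = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
ss_total = ((data - grand_mean) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_raters
ms_subjects = ss_subjects / (n - 1)
ms_raters = ss_raters / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))
icc_a1 = (ms_subjects - ms_error) / (
    ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
)

print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"bias = {bias:.3f} g/min, 95% LoA = [{loa_lower:.3f}, {loa_upper:.3f}]")
print(f"ICC(A,1) = {icc_a1:.2f}")
```

Combining the three outputs in this way mirrors the reasoning in Study 2: a significant paired t-test alongside a high ICC but wide limits of agreement is what supports the conclusion that FATmax differed between repeated tests and that the corresponding workload was highly variable.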
Abstract:
Lymphoedema is a chronic condition predominantly affecting the limbs, although it can involve the trunk and other areas of the body. It is characterised by swelling due to excess accumulation of fluid in body tissues. Secondary lymphoedema, which arises following cancer treatment, is the more common form of lymphoedema in developed countries. At least 20% of those diagnosed with the most common cancers will develop lymphoedema. This is a concern in Australia as incidence of these cancers is increasing. Cancer survival rates are also increasing. Currently, this equates to 9 300 new cases of secondary lymphoedema diagnosed each year. Considerable physical and psychosocial impacts of lymphoedema have been reported and its subsequent impact on health-related quality of life can exacerbate other side effects of cancer treatment. Exercise following cancer treatment has been shown to significantly reduce the impact of treatment side effects and improve quality of life and physical status. While participating in exercise does not increase risk nor exacerbate existing lymphoedema, reductions in incidence of lymphoedema exacerbations and associated symptoms have been observed in women participating in regular weight lifting following breast cancer treatment. Despite these benefits, lymphoedema prevention and management advice cautions people with lymphoedema against ‘repetitive use’ or ‘overuse’ of their affected arm. It is possible that this advice creates a barrier to participation in physical activity; however, little is known about the relationship between physical activity and lymphoedema. In addition, the majority of studies examining the experiences of people living with lymphoedema and the impact of the condition have been conducted internationally and have focused on women following breast cancer. This study sought to explore, firstly, how men and women construct their experience of living with lymphoedema following treatment for a range of cancers in the context of everyday life in Australia; and secondly, to analyse the role of physical activity in the lives of those living with lymphoedema following cancer treatment. A social constructivist grounded theory approach was taken to explore these objectives as it is acknowledged that human actions and the meanings associated with these actions are influenced by the interaction between the self and the social world. It is also acknowledged that the research process itself is a social construction between the researcher and participant. Purposive sampling techniques were used to recruit a total of 29 participants from a variety of sources. Telephone interviews and focus groups were conducted to collect data. Data were concurrently collected and analysed, and analysis was conducted using the constant comparative method. The core category that developed in objective one was ‘sense of self’. The self was defined by perceptions participants held of themselves and their identity prior to a lymphoedema diagnosis and changes to their perceptions and identity since diagnosis. Three conceptual categories which related to each other and to ‘sense of self’ were developed through the process of coding, representing the process of how participants constructed their experiences of living with secondary lymphoedema in the context of everyday life. Firstly, ‘altered normalcy’ reflected the physical and psychosocial changes experienced and the effect they had on participants’ lives.
Secondly, ‘accidental journey’ reflected participants’ journey with the health care system from prior to diagnosis through to longer term management. Thirdly, ‘renegotiating control’ revealed participants’ perceived control over lymphoedema and their ability to participate in daily activities previously enjoyed. These findings revealed the failure of the broader health system to recognise the significant and chronic nature of a lymphoedema diagnosis following cancer treatment, with greater understanding, knowledge and support from health professionals being needed. The findings also reveal that access to health professionals trained in lymphoedema management, a comprehensive approach encompassing both physical and psychosocial needs, and the provision of practical and meaningful guidelines supported by scientific evidence would contribute to improved treatment and management of the condition. The key findings for objective two were that people with lymphoedema define physical activity in different ways. Physical activity post-diagnosis was perceived as important by most for a variety of reasons, ranging from everyday functioning to physical and psychosocial health benefits. Issues relating to the impact of lymphoedema on physical activity concerned the effect on people’s ability to be physically active, confusion about acceptable forms of physical activity, and barriers that lymphoedema presented to being physically active. A relationship between how people construct their experiences with lymphoedema and the role of physical activity was also established. The contribution of physical activity to the lives of people living with lymphoedema following cancer treatment appeared to be influenced by their sense of self as socially constructed through their experiences prior to diagnosis and following diagnosis with lymphoedema. The influence of pre-lymphoedema habits, norms and beliefs suggests the importance of effective health promotion messages to encourage physical activity among the general population, and of specific messages and guidelines particular to the needs of those diagnosed with lymphoedema following cancer treatment. The influence of participants’ social constructions on the lymphoedema experience highlights the importance of improving interactions between the overall health care system and patients, providing a clear treatment plan, providing evidence-based and clear advice about participation in appropriate physical activity (which will limit the physical and psychosocial effects of lymphoedema), and providing comprehensive physical and psychosocial support to those living with the condition and their families. This study has contributed to a deep understanding of people’s experiences with lymphoedema following cancer treatment and the role of physical activity in the context of daily life in Australia. Findings from this study lead to recommendations for advocacy, a comprehensive approach to diagnosis, treatment and management, and specific areas for future research.
Creativity in policing: building the necessary skills to solve complex and protracted investigations
Abstract:
Despite an increased focus on proactive policing in recent years, criminal investigation is still perhaps the most important task of any law enforcement agency. As a result, the skills required to carry out a successful investigation or to be an ‘effective detective’ have been subjected to much attention and debate (Smith and Flanagan, 2000; Dean, 2000; Fahsing and Gottschalk, 2008:652). Stelfox (2008:303) states that “The service’s capacity to carry out investigations comprises almost entirely the expertise of investigators.” In this respect, Dean (2000) highlighted the need to profile criminal investigators in order to promote further understanding of the cognitive approaches they take to the process of criminal investigation. As a result of his research, Dean (2000) produced a theoretical framework of criminal investigation, which included four disparate cognitive or ‘thinking’ styles: the ‘Method’, ‘Challenge’, ‘Skill’ and ‘Risk’ styles. While the Method and Challenge styles deal with adherence to Standard Operating Procedures (SOPs) and the internal ‘drive’ that keeps an investigator going, the Skill and Risk styles both tap into the concept of creativity in policing. It is these two latter styles that provide the focus for this paper. This paper presents a brief discussion of Dean’s (2000) Skill and Risk styles before giving an overview of the broader literature on creativity in policing. The potential benefits of a creative approach, as well as some hurdles which need to be overcome when proposing the integration of creativity within the policing sector, are then discussed. Finally, the paper concludes by proposing further research into Dean’s (2000) Skill and Risk styles and also by stressing the need for significant changes to the structure and approach of the traditional policing organisation before creativity in policing is given the status it deserves.
Abstract:
It would be a rare thing to visit an early years setting or classroom in Australia that does not display examples of young children’s artworks. This practice serves to give schools a particular ‘look’, but is no guarantee of quality art education. The Australian National Review of Visual Arts Education (NRVE) (2009) has called for changes to visual art education in schools. The planned new National Curriculum includes the arts (music, dance, drama, media and visual arts) as one of the five learning areas. Research shows that it is the classroom teacher that makes the difference, and teacher education has a large part to play in reforms to art education. This paper provides an account of one foundation unit of study (Unit 1) for first year university students enrolled in a 4-year Bachelor degree program who are preparing to teach in the early years (0–8 years). To prepare pre-service teachers to meet the needs of children in the 21st century, Unit 1 blends old and new ways of seeing art, child and pedagogy. Claims for the effectiveness of this model are supported with evidence-based research, conducted over the six years of iterations and ongoing development of Unit 1.
Abstract:
In the cancer research field, most in vitro studies still rely on two-dimensional (2D) cultures. However, the trend is rapidly shifting towards using a three-dimensional (3D) culture system. This is because 3D models better recapitulate the microenvironment of cells and, therefore, yield cellular and molecular responses that more accurately describe the pathophysiology of cancer. By adopting technology platforms established by the tissue engineering discipline, it is now possible to grow cancer cells in extracellular matrix (ECM)-like environments and dictate the biophysical and biochemical properties of the matrix. In addition, 3D models can be modified to recapitulate different stages of cancer progression, for instance from the initial development of a tumor to metastasis. Inevitably, recapitulating a heterotypic condition comprising more than one cell type requires a more complex 3D model. To date, 3D models available for studying prostate cancer (CaP)-bone interactions are still lacking. Therefore, the aim of this study was to establish a co-culture model that allows investigation of direct and indirect CaP-bone interactions. Prior to that, 3D polyethylene glycol (PEG)-based hydrogel cultures for CaP cells were first developed and growth conditions were optimised. Characterization of the 3D hydrogel cultures shows that LNCaP cells form a multicellular mass that resembles an avascular tumor. Besides the difference in cell morphology, the response of LNCaP cells to androgen analogue (R1881) stimulation in 3D differs from that of cells in 2D cultures. This discrepancy between 2D and 3D cultures is likely associated with cell-cell contact, density and ligand-receptor interactions. Following the 3D monoculture study, a 3D direct co-culture model of CaP cells and a human tissue engineered bone construct (hTEBC) was developed. Interactions between the CaP cells and human osteoblasts (hOBs) resulted in elevation of Matrix Metalloproteinase 9 (MMP9) for PC-3 cells and Prostate Specific Antigen (PSA) for LNCaP cells. To further investigate the paracrine interaction of CaP cells and hOBs, a 3D indirect co-culture model was developed, where LNCaP cells embedded within PEG hydrogels were co-cultured with the hTEBC. It was found that the cellular changes observed reflect the early events of CaP colonizing the bone site. Interestingly, in the absence of androgens, up-regulation of PSA and other kallikreins was also detected in the co-culture compared to the LNCaP monoculture. This non-androgenic stimulation could be triggered by soluble factors secreted by the hOBs, such as Interleukin-6. There was also a decrease in alkaline phosphatase (ALP) activity and down-regulation of hOB genes when co-cultured with LNCaP cells, changes that have not been previously described. These genes include transforming growth factor β1 (TGFβ1), osteocalcin and Vimentin. However, no changes to epithelial markers (e.g. E-cadherin, Cytokeratin 8) were observed in either cell type in the co-culture. These intriguing changes observed in the co-cultures have enriched the basic knowledge of the CaP cell-bone interaction. From this study, we have shown evidence of the feasibility and versatility of our established 3D models. These models can be adapted to test various hypotheses pertaining to the underlying mechanisms of bone metastasis and could provide a vehicle for anticancer drug screening purposes in the future.
Abstract:
Newly licensed drivers on a provisional or intermediate licence have the highest crash risk when compared with any other group of drivers. In comparison, learner drivers have the lowest crash risk. Graduated driver licensing is one countermeasure that has been demonstrated to effectively reduce the crashes of novice drivers. This thesis examined the graduated driver licensing systems in two Australian states in order to better understand the behaviour of learner drivers, provisional drivers and the supervisors of learner drivers. By doing this, the thesis investigated the personal, social and environmental influences on novice driver behaviour as well as providing effective baseline data against which to measure subsequent changes to the licensing systems. In the first study, conducted prior to the changes to the graduated driver licensing system introduced in mid-2007, drivers who had recently obtained their provisional licence in Queensland and New South Wales were interviewed by telephone regarding their experiences while driving on their learner licence. Of the 687 eligible people approached to participate at driver licensing centres, 392 completed the study representing a response rate of 57.1 per cent. At the time the data was collected, New South Wales represented a more extensive graduated driver licensing system when compared with Queensland. The results suggested that requiring learners to complete a mandated number of hours of supervised practice impacts on the amount of hours that learners report completing. While most learners from New South Wales reported meeting the requirement to complete 50 hours of practice, it appears that many stopped practising soon after this goal was achieved. In contrast, learners from Queensland, who were not required to complete a specific number of hours at the time of the survey, tended to fall into three groups. The first group appeared to complete the minimum number of hours required to pass the test (less than 26 hours), the second group completed 26 to 50 hours of supervised practice while the third group completed significantly more practice than the first two groups (over 100 hours of supervised practice). Learner drivers in both states reported generally complying with the road laws and were unlikely to report that they had been caught breaking the road rules. They also indicated that they planned to obey the road laws once they obtained their provisional licence. However, they were less likely to intend to comply with recommended actions to reduce crash risk such as limiting their driving at night. This study also identified that there were relatively low levels of unaccompanied driving (approximately 15 per cent of the sample), very few driving offences committed (five per cent of the sample) and that learner drivers tended to use a mix of private and professional supervisors (although the majority of practice is undertaken with private supervisors). Consistent with the international literature, this study identified that very few learner drivers had experienced a crash (six per cent) while on their learner licence. The second study was also conducted prior to changes to the graduated driver licensing system and involved follow up interviews with the participants of the first study after they had approximately 21 months driving experience on their provisional licence. Of the 392 participants that completed the first study, 233 participants completed the second interview (representing a response rate of 59.4 per cent). 
As with the first study, at the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. For instance, novice drivers from New South Wales were required to progress through two provisional licence phases (P1 and P2) while there was only one provisional licence phase in Queensland. Among the participants in this second study, almost all provisional drivers (97.9 per cent) owned or had access to a vehicle for regular driving. They reported that they were unlikely to break road rules, such as driving after a couple of drinks, but were also unlikely to comply with recommended actions, such as limiting their driving at night. When their provisional driving behaviour was compared to the stated intentions from the first study, the results suggested that their intentions were not a strong predictor of their subsequent behaviour. Their perception of risk associated with driving declined from when they first obtained their learner licence to when they had acquired provisional driving experience. Just over 25 per cent of participants in study two reported that they had been caught committing driving offences while on their provisional licence. Nearly one-third of participants had crashed while driving on a provisional licence, although few of these crashes resulted in injuries or hospitalisations. To complement the first two studies, the third study examined the experiences of supervisors of learner drivers, as well as their perceptions of their learner’s experiences. This study was undertaken after the introduction of the new graduated driver licensing systems in Queensland and New South Wales in mid- 2007, providing insights into the impacts of these changes from the perspective of supervisors. The third study involved an internet survey of 552 supervisors of learner drivers. Within the sample, approximately 50 per cent of participants supervised their own child. Other supervisors of the learner drivers included other parents or stepparents, professional driving instructors and siblings. For two-thirds of the sample, this was the first learner driver that they had supervised. Participants had provided an average of 54.82 hours (sd = 67.19) of supervision. Seventy-three per cent of participants indicated that their learners’ logbooks were accurate or very accurate in most cases, although parents were more likely than non-parents to report that their learners’ logbook was accurate (F (1,546) = 7.74, p = .006). There was no difference between parents and non-parents regarding whether they believed the log book system was effective (F (1,546) = .01, p = .913). The majority of the sample reported that their learner driver had had some professional driving lessons. Notwithstanding this, a significant proportion (72.5 per cent) believed that parents should be either very involved or involved in teaching their child to drive, with parents being more likely than non-parents to hold this belief. In the post mid-2007 graduated driver licensing system, Queensland learner drivers are able to record three hours of supervised practice in their log book for every hour that is completed with a professional driving instructor, up to a total of ten hours. Despite this, there was no difference identified between Queensland and New South Wales participants regarding the amount of time that they reported their learners spent with professional driving instructors (X2(1) = 2.56, p = .110). 
Supervisors from New South Wales were more likely to ensure that their learner driver complied with the road laws. Additionally, with the exception of drug driving laws, New South Wales supervisors believed it was more important to teach safety-related behaviours such as remaining within the speed limit, car control and hazard perception than those from Queensland. This may be indicative of more intensive road safety educational efforts in New South Wales or the longer time that graduated driver licensing has operated in that jurisdiction. However, other factors may have contributed to these findings and further research is required to explore the issue. In addition, supervisors reported that their learner driver was involved in very few crashes (3.4 per cent) and offences (2.7 per cent). This relatively low reported crash rate is similar to that identified in the first study. Most of the graduated driver licensing research to date has been applied in nature and lacked a strong theoretical foundation. These studies used Akers’ social learning theory to explore the self-reported behaviour of novice drivers and their supervisors. This theory was selected as it has previously been found to provide a relatively comprehensive framework for explaining a range of driver behaviours including novice driver behaviour. Sensation seeking was also used in the first two studies to complement the non-social rewards component of Akers’ social learning theory. This program of research identified that both Akers’ social learning theory and sensation seeking were useful in predicting the behaviour of learner and provisional drivers over and above socio-demographic factors. Within the first study, Akers’ social learning theory accounted for an additional 22 per cent of the variance in learner driver compliance with the law, over and above a range of socio-demographic factors such as age, gender and income. The two constructs within Akers’ theory which were significant predictors of learner driver compliance were the behavioural dimension of differential association relating to friends, and anticipated rewards. Sensation seeking predicted an additional six per cent of the variance in learner driver compliance with the law. When considering a learner driver’s intention to comply with the law while driving on a provisional licence, Akers’ social learning theory accounted for an additional 10 per cent of the variance above socio-demographic factors, with anticipated rewards being a significant predictor. Sensation seeking predicted an additional four per cent of the variance. The results suggest that the more rewards individuals anticipate for complying with the law, the more likely they are to obey the road rules. Further research is needed to identify which specific rewards are most likely to encourage novice drivers’ compliance with the law. In the second study, Akers’ social learning theory predicted an additional 40 per cent of the variance in self-reported compliance with road rules over and above socio-demographic factors, while sensation seeking accounted for an additional five per cent of the variance. A number of Akers’ social learning theory constructs significantly predicted provisional driver compliance with the law, including the behavioural dimension of differential association for friends, the normative dimension of differential association, personal attitudes and anticipated punishments. 
The consistent prediction of additional variance by sensation seeking over and above the variables within Akers’ social learning theory in both studies one and two suggests that sensation seeking is not fully captured within the non-social rewards dimension of Akers’ social learning theory, at least for novice drivers. It appears that novice drivers are strongly influenced by the desire to engage in new and intense experiences. While socio-demographic factors and the perception of risk associated with driving had an important role in predicting the behaviour of the supervisors of learner drivers, Akers’ social learning theory provided further levels of prediction over and above these factors. The Akers’ social learning theory variables predicted an additional 14 per cent of the variance in the extent to which supervisors ensured that their learners complied with the law and an additional eight per cent of the variance in the supervisors’ provision of a range of practice experiences. The normative dimension of differential association, personal attitudes towards the use of professional driving instructors and anticipated rewards were significant predictors of supervisors ensuring that their learner complied with the road laws, while the normative dimension was important for the range of practice. This suggests that supervisors who engage with other supervisors who ensure their learner complies with the road laws and provide a range of practice to their own learners are more likely to also engage in these behaviours. Within this program of research, there were several limitations, including the method of recruitment of participants within the first study, the lower participation rate in the second study, an inability to calculate a response rate for study three, and the use of self-report data for all three studies. Within the first study, participants were only recruited from larger driver licensing centres to ensure that there was a sufficient throughput of drivers to approach. This may have biased the results due to possible differences in learners that obtain their licences in locations with smaller licensing centres. Only 59.4 per cent of the sample in the first study completed the second study. This may be a limitation if there was a common reason why those not participating were unable to complete the interview, leading to a systematic impact on the results. The third study used a combination of convenience and snowball sampling, which meant that it was not possible to calculate a response rate. All three studies used self-report data which, in many cases, is considered a limitation. However, self-report data may be the only method that can be used to obtain some information. This program of research has a number of implications for countermeasures in both the learner licence phase and the provisional licence phase. During the learner phase, licensing authorities need to carefully consider the number of hours that they mandate learner drivers must complete before they obtain their provisional driving licence. If they mandate an insufficient number of hours, there may be inadvertent negative effects as a result of setting too low a limit. This research suggests that logbooks may be a useful tool for learners and their supervisors in recording and structuring their supervised practice. However, it would appear that the usage rates for logbooks will remain low if they remain voluntary.
One strategy for achieving larger amounts of supervised practice is for learner drivers and their supervisors to make supervised practice part of their everyday activities. As well as assisting the learner driver to accumulate the required number of hours of supervised practice, this would ensure that they gain experience in the types of environments that they will probably encounter when driving unaccompanied in the future, such as to and from education or work commitments. There is also a need for policy processes to ensure that parents and professional driving instructors communicate effectively regarding the learner driver’s progress. This is required as most learners spend at least some time with a professional instructor despite receiving significant amounts of practice with a private supervisor. However, many supervisors did not discuss their learner’s progress with the driving instructor. During the provisional phase, there is a need to strengthen countermeasures to address the high crash risk of these drivers. Although many of these crashes are minor, most involve at least one other vehicle. Therefore, there are social and economic benefits to reducing these crashes. If the new, post-2007 graduated driver licensing systems do not significantly reduce crash risk, there may be a need to introduce further provisional licence restrictions such as separate night driving and peer passenger restrictions (as opposed to the hybrid version of these two restrictions operating in both Queensland and New South Wales). Provisional drivers appear to be more likely to obey some provisional licence laws, such as lower blood alcohol content limits, than others such as speed limits. Therefore, there may be a need to introduce countermeasures to encourage provisional drivers to comply with specific restrictions. When combined, these studies provided significant information regarding graduated driver licensing programs. This program of research has investigated graduated driver licensing utilising a cross-sectional and longitudinal design to develop our understanding of the experiences of novice drivers as they progress through the system, and to help reduce crash risk once novice drivers commence driving by themselves.
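Throughout the three studies above, the key quantity reported is the additional variance in compliance explained by Akers’ social learning theory constructs, and then by sensation seeking, over and above socio-demographic factors. A hierarchical (blockwise) regression of roughly the following shape produces that kind of incremental R-squared; the Python sketch below uses simulated data and hypothetical variable names, not the thesis dataset or its exact predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulated data with hypothetical variable names; not the thesis dataset.
df = pd.DataFrame({
    "age": rng.normal(20, 3, n),
    "gender": rng.integers(0, 2, n),
    "income": rng.normal(40, 10, n),
    "differential_association": rng.normal(0, 1, n),
    "anticipated_rewards": rng.normal(0, 1, n),
    "sensation_seeking": rng.normal(0, 1, n),
})
df["compliance"] = (
    0.1 * df["age"]
    + 0.8 * df["differential_association"]
    + 0.6 * df["anticipated_rewards"]
    - 0.4 * df["sensation_seeking"]
    + rng.normal(0, 1.5, n)
)

def r_squared(predictors):
    # Fit an OLS model for the given predictor block and return its R-squared.
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["compliance"], X).fit().rsquared

block1 = ["age", "gender", "income"]                                    # socio-demographics
block2 = block1 + ["differential_association", "anticipated_rewards"]   # + Akers' constructs
block3 = block2 + ["sensation_seeking"]                                 # + sensation seeking

r2_1, r2_2, r2_3 = map(r_squared, (block1, block2, block3))
print(f"Block 1 (socio-demographics):       R^2  = {r2_1:.3f}")
print(f"Block 2 (+ social learning theory): dR^2 = {r2_2 - r2_1:.3f}")
print(f"Block 3 (+ sensation seeking):      dR^2 = {r2_3 - r2_2:.3f}")
```

In this kind of blockwise comparison, the increase in R-squared from one block to the next corresponds to the "additional variance explained" figures quoted above for the social learning theory constructs and for sensation seeking.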