184 results for Post-registration changes to medicines
Abstract:
Driving while using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and by methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study found a low level of awareness, knowledge and risk perception among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). This lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients lies primarily with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare increasingly being provided in outpatient settings (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram, indicates the level of potential driving impairment through the use of colour, and advises on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity of potential effects, and does not provide specific guidance on the attitude or actions that the individual should adopt towards driving. It relies upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines.
It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication users' knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by incorporating beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high, and the ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as potentially at risk of driving impaired, based on their reported recent use of medications requiring an ancillary warning label and their level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health-protective action. Participants did not express a particular need to be advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that participants would be significantly more likely to follow advice from a doctor than from a pharmacist. High levels of knowledge were revealed of general principles about the effects of alcohol, illicit drugs and combinations of substances, and of the related health and crash risks. This may reflect a sample-specific effect: the professional guidelines for hospital pharmacists make it essential that advisory labels are applied to medicines where applicable and that warning advice is given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which extends constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting intentions and behaviour regarding compliance with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all of the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice.
It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up, on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation. The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed in visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, little research has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance were extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core issue of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and the red colour, were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. In the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample, but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall, and a further important finding was that higher warning label recall was associated with an increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label than the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication, and they raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels: the French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment.
The differences in the assignment of responsibility between the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches to the decision to drive while taking medication raise the core question of who is best able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications. The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These include enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and reinforcing the warning message. More broadly, it is suggested that there would be benefit in wider dissemination of research-based information on increased crash risk, and in systematic monitoring of and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications, and of their interactions with existing medical conditions and other substances, on driving skills; the effects of variations in warning label design; individual behaviours and characteristics (particularly among groups who are dependent upon prescription medication); and the validation of consumer self-assessment of impairment.
Abstract:
Webb et al. (2009) described a late Pleistocene coral sample wherein the diagenetic stabilization of original coral aragonite to meteoric calcite was halted more or less mid-way through the process, allowing direct comparison of pre-diagenetic and post-diagenetic microstructure and trace element distributions. Those authors found that the rare earth elements (REEs) were relatively stable during meteoric diagenesis, unlike divalent cations such as Sr, and it was thus concluded that original, in this case marine, REE distributions potentially could be preserved through the meteoric carbonate stabilization process that must have affected many, if not most, ancient limestones. Although this was not the case in the analysed sample, they noted that where such diagenesis took place in laterally transported groundwater, trace elements derived from that groundwater could be incorporated into diagenetic calcite, thus altering the initial REE distribution (Banner et al., 1988). Hence, the paper was concerned with the diagenetic behaviour of REEs in a groundwater-dominated karst system. The comment offered by Johannesson (2011) does not question those research results, but rather seeks to clarify an interpretation made by Webb et al. (2009) of an earlier paper, Johannesson et al. (2006).
Abstract:
Although the Uniform Civil Procedure Rules 1999 (Qld) (UCPR) have always included a power for the court to order a party to pay an amount for costs to be fixed by the court, until recently the power was rarely used in the higher courts. In light of recent practice directions, and the changes to the procedures for assessment of costs contained in the new Chapter 17A of the UCPR, this is no longer the case. The judgment of Mullins J in ASIC v Atlantic 3 Financial (Aust) Pty Ltd [2008] QSC 9 provides some helpful guidance for practitioners about the principles which might be applied.
Abstract:
Background The number of middle-aged working individuals being diagnosed with cancer is increasing, and so too are disruptions to their employment. The aim of the Working After Cancer Study is to examine changes to work participation in the 12 months following a diagnosis of primary colorectal cancer. The study will identify barriers to work resumption, describe limitations on workforce participation, and evaluate the influence of these factors on health-related quality of life. Methods/Design An observational population-based study has been designed involving 260 adults newly diagnosed with colorectal cancer between January 2010 and September 2011 who were in paid employment at the time they were diagnosed. These cancer cases will be compared to a nationally representative comparison group of 520 adults with no history of cancer from the general population. Eligible cases will have a histologically confirmed diagnosis of colorectal cancer and will be identified through the Queensland Cancer Registry. Data on the comparison group will be drawn from the Household, Income and Labour Dynamics in Australia (HILDA) Survey. Data collection for the cancer group will occur at 6 and 12 months after diagnosis, with work questions also asked about the time of diagnosis, while retrospective data on the comparison group will come from HILDA Waves 2009 and 2010. Using validated instruments administered via telephone and postal surveys, data will be collected on socio-demographic factors, work status and circumstances, and health-related quality of life (HRQoL) for both groups, while the cases will have additional data collected on cancer treatment and symptoms, work productivity and cancer-related HRQoL. Primary outcomes include change in work participation at 12 months, time to work re-entry, work limitations and change in HRQoL status. Discussion This study will address the reasons for work cessation after cancer, the mechanisms people use to remain working, existing workplace support structures, and the implications for individuals, families and workplaces. It may also provide key information for governments on productivity losses.
Abstract:
Traditional treatments for weight management have focussed on prescribed dietary restriction, regular exercise, or a combination of both. However, recidivism for such prescribed treatments remains high, particularly among the overweight and obese. The aim of this thesis was to investigate voluntary dietary changes in the presence of prescribed mixed-mode exercise, conducted over 16 weeks. With the implementation of a single lifestyle change (exercise), it was postulated that the onerous burden of concomitant dietary and exercise compliance would be reduced, leading to voluntary lifestyle changes in such areas as diet. In addition, the failure of exercise as a single weight-loss treatment has been reported to be due to compensatory energy intakes, although much of the evidence is from acute exercise studies, necessitating investigation of compensatory intakes during a long-term exercise intervention. Following 16 weeks of moderate-intensity exercise, 30 overweight and obese (BMI ≥ 25.00 kg/m²) men and women showed small but statistically significant decreases in mean dietary fat intakes, without compensatory increases in other macronutrient or total energy intakes. Indeed, total energy intakes were significantly lower for men and women following the exercise intervention, due to the decreases in dietary fat intakes. There was a risk that acceptance of the statistical validity of the small changes to dietary fat intakes may have constituted a Type I error, with false rejection of the null hypothesis. Oro-sensory perceptions of changes in fat loads were therefore investigated to determine whether the measured dietary fat changes were detectable by the human palate. The ability to detect small changes in dietary fat provides sensory feedback for self-initiated dietary changes, but lean and overweight participants were unable to distinguish changes to fat loads of magnitudes similar to those measured in the exercise intervention study. Accuracy of the dietary measurement instrument was improved, with the effects of random error (day-to-day variability) minimised by the use of a statistically validated 8-day, multiple-pass, 24-hour dietary recall instrument. However, systematic error (underreporting) may have masked the magnitude of dietary change, particularly the reduction in dietary fat intakes. A purported biomarker, plasma apolipoprotein A-IV (apoA-IV), was subsequently investigated to monitor systematic error in self-reported dietary intakes. Changes in plasma apoA-IV concentrations were directly correlated with increases and decreases in dietary fat intakes, suggesting that this objective marker may be a useful tool for improving the accuracy of dietary measurement in overweight and obese populations, who are susceptible to dietary underreporting.
Abstract:
The study of urban morphology has become an expanding field of research within the architectural discipline, providing theories to be used as tools in the understanding and design of urban landscapes of the past, the present and the future. Drawing upon contemporary architectural design theory, this investigation reveals what a sectional analysis of an urban landscape can add to the existing research methods within this field. This paper conducts an enquiry into the use of the section as a tool for urban morphological analysis. Following the methodology of the British school of urban morphology, sections through the urban fabric of the case study city of Brisbane are compared. The results are categorised to depict changes in scale, components and utilisation across various timeframes. The key findings illustrate how the section, when read in conjunction with the plan, can be used to interpret changes to urban form and the relationship that this has to the quality of the urban environment in the contemporary city.
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is the key factor for an organisation, and even for government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and they are now being applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, the adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables highly informative estimates of change point parameters to be obtained, since the results take the form of probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared to a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and of major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then pursued in healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly support the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalisability of the developed model are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also extend to the industrial and business domains where quality monitoring was initially developed.
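To make the change point idea concrete, below is a minimal sketch of Bayesian estimation of a single step change in a Poisson process, in the spirit of the approach described in this abstract. It assumes the PyMC library; the priors, the synthetic data and all variable names are illustrative assumptions, not taken from the thesis.

import numpy as np
import pymc as pm

# Synthetic daily event counts: the rate steps from 4 to 7 at time 60 of 100.
rng = np.random.default_rng(42)
counts = np.concatenate([rng.poisson(4.0, 60), rng.poisson(7.0, 40)])
n = len(counts)
t = np.arange(n)

with pm.Model():
    # Discrete uniform prior over the unknown change time.
    tau = pm.DiscreteUniform("tau", lower=0, upper=n - 1)
    # Weakly informative priors on the rates before and after the change.
    lam1 = pm.Exponential("lam1", 1.0)
    lam2 = pm.Exponential("lam2", 1.0)
    # The Poisson rate switches from lam1 to lam2 at tau (a step change).
    rate = pm.math.switch(t < tau, lam1, lam2)
    pm.Poisson("obs", mu=rate, observed=counts)
    # MCMC sampling; PyMC assigns a discrete sampler to tau and NUTS to the rates.
    idata = pm.sample(2000, tune=1000, random_seed=1)

# Posterior summaries for the time and magnitude of the shift.
print(idata.posterior["tau"].mean().item())
print((idata.posterior["lam2"] - idata.posterior["lam1"]).mean().item())

Sampling yields posterior distributions rather than point estimates, which is the probability quantification the abstract highlights: the spread of the posterior on tau directly expresses the remaining uncertainty about when the shift occurred.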
Abstract:
Introduction and objectives Early recognition of deteriorating patients results in better patient outcomes. Modified early warning scores (MEWS) attempt to identify deteriorating patients early so that timely interventions can occur, thus reducing serious adverse events. We compared the frequency of vital sign recording in the 24 h post-ICU discharge and the 24 h preceding unplanned ICU admission, before and after a new observation chart using MEWS and an associated educational programme were implemented in an Australian tertiary referral hospital in Brisbane. Design Prospective before-and-after intervention study, using a convenience sample of ICU patients who had been discharged to the hospital wards, and of patients with an unplanned ICU admission, during November 2009 (before implementation; n = 69) and February 2010 (after implementation; n = 70). Main outcome measures Any change in the frequency of full vital sign sets or individual vital signs before and after the new MEWS observation chart and associated education programme were implemented. A full set of vital signs included blood pressure (BP), heart rate (HR), temperature (T°), oxygen saturation (SaO2), respiratory rate (RR) and urine output (UO). Results After the MEWS observation chart implementation, we identified a statistically significant increase (210%) in the overall frequency of full vital sign set documentation during the first 24 h post-ICU discharge (95% CI 148, 288%, p value <0.001). The frequency of all individual vital sign recordings increased after the MEWS observation chart was implemented; in particular, T° recordings increased by 26% (95% CI 8, 46%, p value = 0.003). An increased frequency of full vital sign set recordings for unplanned ICU admissions was also found (44%, 95% CI 2, 102%, p value = 0.035). The only statistically significant improvement in individual vital sign recordings was urine output, which demonstrated a 27% increase (95% CI 3, 57%, p value = 0.029). Conclusions The implementation of a new MEWS observation chart plus a supporting educational programme was associated with statistically significant increases in the frequency of full vital sign set and individual vital sign recordings during the first 24 h post-ICU discharge. There were no significant changes to the frequency of individual vital sign recordings in unplanned admissions to ICU after the MEWS observation chart was implemented, except for urine output. Overall increases in the frequency of full vital sign sets were seen.
Abstract:
The term gamification describes the addition of game elements to non-game contexts as a means to motivate and engage users. This study investigates the design, delivery and pilot evaluation of a gamified smartphone application built to introduce new students to the campus, services and people at university during their first few weeks. This paper describes changes to the application made after an initial field study was undertaken and provides an evaluation of the impact of the redesign. Survey responses were collected from 13 students and usage data were captured from 105 students. Results indicate three levels of user engagement and suggest that there is value in adding game elements to the experience in this way. A number of issues are identified and discussed relating to game challenges, input, and facilitating game elements in an event setting such as university orientation.
Abstract:
This submission addresses the following terms of reference: 1) the nature, prevalence and level of cybersafety risks and threats experienced by senior Australians; 2) the impact and implications of those risks and threats on access and use of information and communication technologies by senior Australians; 3) the adequacy and effectiveness of current government and industry initiatives to respond to those threats, including education initiatives aimed at senior Australians; 4) best practice safeguards, and any possible changes to Australian law, policy or practice that will strengthen the cybersafety of senior Australians.
Abstract:
The role of individual ocular tissues in mediating changes to the sclera during myopia development is unclear. The aim of this study was to examine the effects of retina, RPE and choroidal tissues from myopic and hyperopic chick eyes on the DNA and glycosaminoglycan (GAG) content of cultures of chick scleral fibroblasts. Primary cultures of fibroblastic cells expressing vimentin and α-smooth muscle actin were established in serum-supplemented growth medium from 8-day-old normal chick sclera. The fibroblasts were subsequently co-cultured with posterior eye cup tissue (full thickness, containing retina, RPE and choroid) obtained from untreated eyes and from eyes wearing translucent diffusers (form-deprivation myopia, FDM) or -15D lenses (lens-induced myopia, LIM) for 3 days (post-hatch day 5 to 8) (n=6 per treatment group). The effect of tissues (full thickness and individual retina, RPE and choroid layers) from -15D (LIM) versus +15D (lens-induced hyperopia, LIH) treated eyes was also determined. Refraction changes in the direction predicted by the visual treatments were confirmed by retinoscopy prior to tissue collection. The GAG and DNA content of the scleral fibroblast cultures were measured using GAG and PicoGreen assays. There was no significant difference in the effect of full thickness tissue from FDM versus LIM treated eyes on the DNA and GAG content of scleral fibroblasts (DNA 8.9±2.6 µg and 8.4±1.1 µg, p=0.12; GAG 11.2±0.6 µg and 10.1±1.0 µg, p=0.34). Retina from LIM eyes did not alter fibroblast DNA or GAG content compared to retina from LIH eyes (DNA 27.2±1.7 µg versus 23.2±1.5 µg, p=0.21; GAG 28.1±1.7 µg versus 28.7±1.2 µg, p=0.46). Similarly, choroid from LIH and LIM eyes did not produce a differential effect on DNA content (DNA, LIM 46.9±6.4 µg versus LIH 51.5±4.7 µg, p=0.31), whereas GAG content was higher for cells in co-culture with choroid from LIH eyes (GAG 32.5±0.7 µg versus 18.9±1.2 µg, F(1,6)=9.210, p=0.0002). In contrast, fibroblast DNA was greater in co-culture with RPE from LIM eyes than with the empty basket, and lower in co-culture with RPE from LIH eyes (LIM: 72.4±6.3 µg versus empty basket: 46.03±1.0 µg, F(1,6)=69.99, p=0.0005; LIH: 27.9±2.3 µg versus empty basket: 46.03±1.0 µg, p=0.0004). GAG content was higher with RPE from LIH eyes (LIH: 33.7±1.9 µg versus empty basket: 29.5±0.8 µg, F(1,6)=13.99, p=0.010) and lower with RPE from LIM eyes (LIM: 27.7±0.9 µg versus empty basket: 29.5±0.8 µg, p=0.021). In conclusion, these experiments provide evidence for a directional growth signal that is present (and remains) in the ex-vivo RPE, but that does not remain in the ex-vivo retina. The identity of the factor(s) that can modify scleral cell DNA and GAG content requires further research.
Abstract:
Intracellular Flightless I (Flii), a gelsolin family member, has been found to have roles in modulating actin regulation, transcriptional regulation and inflammation. In vivo, Flii can regulate wound healing responses. We have recently shown that a pool of Flii is secreted by fibroblasts and macrophages, cells typically found in wounds, and that its secretion can be upregulated upon wounding. We show that secreted Flii can bind to the bacterial cell wall component lipopolysaccharide and has the potential to regulate inflammation. We now show that secreted Flii is present in both acute and chronic wound fluid.
Abstract:
Concern to ensure that all children have access to high-quality educational experiences in the early years of life has instigated moves to increase the qualifications of staff in the childcare workforce, in particular to increase the number of degree-qualified teachers. However, existing data suggest that work in the childcare sector is viewed less favourably by those undertaking early childhood education degrees; for most, childcare is not a preferred place of employment. This study asked whether a practicum in a childcare setting would improve attitudes to childcare and willingness to consider working in childcare settings. In a study of a cohort of Bachelor of Education (Early Childhood) students, measures of attitudes to childcare and willingness to work in childcare were taken before and after the practicum. Additionally, students provided accounts of their practicum experiences. Results indicate a trend of a group-level increase in positive attitudes and willingness to consider work in childcare, but with considerable individual differences influenced by the quality of the practicum experience. The relationship with, and the model provided by, centre directors and the group leader in the practicum class were identified as key influencing factors. Results are discussed in terms of models of pedagogical leadership.
Abstract:
Australia has new national legislation – the Personal Property Securities Act 2009 (Cth) and the Personal Property Securities Regulations 2010 – which commenced operation on 30 January 2012. The policy objectives of the new legislation are to increase certainty and consistency and to reduce complexity and cost. To achieve this, the legislation treats like transactions alike, by focusing on substance over form, and so removes distinctions between security interests which have been based on their structure. Differences based on the location or nature of the secured property and on the debtor's legal form, as an individual or company, have also disappeared. We now have one single national scheme and one national electronic registration system for all security interests throughout Australia. The Act applies to security interests in tangible and intangible personal property, including those based on some form of title retention which are not security interests under the general law. This legislation rationalises previous laws and brings about substantial changes to this area of law. This paper seeks to explain the principal changes and their implications.
Abstract:
Many construction industry decision-makers believe there is a lack of off-site manufacture (OSM) adoption for non-residential construction in Australia. Identification of construction business processes was considered imperative in order to assist decision-makers to increase OSM utilisation. The premise that domain knowledge can be re-used to provide an intervention point in the construction process led a team of researchers to construct simple base-line process models for the complete construction process, segmented into six phases. Sixteen domain-knowledge industry experts were asked to review the construction phase base-line models to answer the question “Where in the process illustrated by this base-line model phase is an OSM task?”. Through an iterative and generative process, a number of off-site manufacture intervention points were identified and integrated into the process models. The re-use of industry expert domain knowledge provided suggestions for new ways to do basic tasks, thus facilitating changes to current practice. It is expected that implementation of the new processes will lead to systemic industry change and thus to growth in productivity due to increased adoption of OSM.