Abstract:
Web 2.0 is a new generation of online applications that allow people to collaborate and share information on the web. The use of such applications by employees enhances knowledge management (KM) in organisations. Employee involvement is a critical success factor, as the concept is based on openness, engagement and collaboration between people, with organisational knowledge derived from employees' experience, skills and best practices. Consequently, employees' perceptions are recognised as an important factor in Web 2.0 adoption for KM and are worthy of investigation. Few studies define and explore employees' acceptance of enterprise 2.0 for KM. This paper provides a systematic review of the literature before presenting the findings as a preliminary conceptual model, which represents the first stage of an ongoing research project that will culminate in an empirical study. Reviewing the available literature on technology acceptance, knowledge management and enterprise 2.0 helps to identify all potential user acceptance factors for enterprise 2.0. The preliminary conceptual model is a refinement of the theory of planned behaviour (TPB): the user acceptance factors have been mapped onto the TPB's main components, namely attitude towards the behaviour, subjective norms and perceived behavioural control, which are the determinants of an individual's intention to perform a particular behaviour.
Abstract:
Australia's domestic income tax legislation and double tax agreements contain transfer pricing rules which are designed to counter the underpayment of tax by businesses engaged in international dealings between related parties. The current legislation and agreements require that related party transactions take place at a value which reflects an arm's length price, that is, a price which would be charged between unrelated parties. For a host of reasons, it is increasingly difficult for multinational entities to demonstrate that they are transferring goods and services at a price which is reflective of the behaviour of independent parties, thereby making it difficult to demonstrate compliance with the relevant legislation. Further, where an Australian business undertakes cross-border related party transactions there is the risk of an audit by the Australian Tax Office (ATO). If a business wishes to avoid the risk of an audit, and any ensuing penalties, there is one option: an advance pricing arrangement (APA). An APA is an agreement whereby the future transfer pricing methodology to be used to determine the arm's length price is agreed between the taxpayer and the relevant tax authority or authorities. The ATO views the APA process as an important part of its international tax strategy and believes that it provides complementary benefits to both the taxpayer and the ATO. The ATO promotes the APA process on the basis that it creates greater certainty for all parties while reducing compliance costs and the risk of audit and penalty. While the ATO regards the APA system as a success, it may be argued that such a system is simply a practical response to an ongoing problem: the inherent failure of both the legislation, and of the ATO's interpretation and application of it, to provide certainty to the taxpayer. This paper investigates the use of APAs as a solution to the problem of transfer pricing and considers whether they are the success the ATO claims. It is argued that APAs undoubtedly provide a valuable practical tool for multinational entities facing the challenges of the taxation of global trading under the current transfer pricing regime. They do not, however, provide a long-term solution. Rather, the long-term solution may lie in legislative amendment.
Abstract:
In the late twentieth and early twenty-first centuries, Australia's relationship with its Asian neighbours has been the subject of ongoing aesthetic, cultural and political contestations. As Alison Richards has noted, Australia's colonial legacy, its Asia-Pacific location, and its 'white' self-perception have always made Australia's relations with Asia fraught. In the latter part of the twentieth century, the paradoxes inherent in Australia's relationships with and within the Asian region became a dominant theme in debates about nation, nationhood and identity, and prompted a shift in the construction of 'Asianness' on Australian stages. On the one hand, anxiety about the multicultural policy of the 1970s and 1980s, and about then-Prime Minister Paul Keating's push for greater economic, cultural and artistic exchange with Asia via policies such as the Creative Nation Cultural Policy (1994), saw large numbers of Australians latch on to the reactionary, racist politics of Pauline Hanson's One Nation Party. As Jacqueline Lo has argued, in this period Asian-Australians were frequently represented as an unassimilable Other, a threat to Australia's 'white' identity, and to individual Australians' jobs and opportunities. On the other hand, during the same period, a desire to counter the racism in Australian culture, and to develop a 'voice' that would distinguish Australian cultural products from European theatrical traditions, combined with the new opportunities for cross-cultural exchange that came with the Creative Nation Cultural Policy to produce what Helen Gilbert and Jacqueline Lo have characterised as an Asian turn in Australian theatre...
Abstract:
This article is a response to Kim Dalton's 2011 Henry Mayer Lecture. It focuses on Dalton's discussion of Australian content in the context of the government's ongoing Convergence Review.
Abstract:
It would be a rare thing to visit an early years setting or classroom in Australia that does not display examples of young children's artworks. This practice serves to give schools a particular 'look', but is no guarantee of quality art education. The Australian National Review of Visual Arts Education (NRVE) (2009) has called for changes to visual art education in schools. The planned new National Curriculum includes the arts (music, dance, drama, media and visual arts) as one of the five learning areas. Research shows that it is the classroom teacher who makes the difference, and that teacher education has a large part to play in reforms to art education. This paper provides an account of one foundation unit of study (Unit 1) for first year university students enrolled in a 4-year Bachelor degree program who are preparing to teach in the early years (0–8 years). To prepare pre-service teachers to meet the needs of children in the 21st century, Unit 1 blends old and new ways of seeing art, child and pedagogy. Claims for the effectiveness of this model are supported with evidence-based research conducted over six years of iteration and ongoing development of Unit 1.
Abstract:
During the late 20th century it was proposed that a design aesthetic reflecting current ecological concerns was required within the overall domain of the built environment, and specifically within landscape design. To address this, some authors suggested various theoretical frameworks upon which such an aesthetic could be based. Within these frameworks there was an underlying theme that the patterns and processes of Nature may have the potential to form this aesthetic: an aesthetic based on fractal rather than Euclidean geometry. In order to understand how fractal geometry, described as the geometry of Nature, could become the referent for a design aesthetic, this research examines the mathematical concepts of fractal geometry and the philosophical concepts underlying the terms 'Nature' and 'aesthetics'. The findings of this initial research meant that a new definition of Nature was required in order to overcome the barrier presented by the western philosophical Nature–culture duality. This new definition of Nature is based on the type and use of energy. Similarly, it became clear that current usage of the term aesthetics has more in common with the term 'style' than with its correct philosophical meaning. The aesthetic philosophy of both art and the environment recognises different aesthetic criteria related to either the subject or the object, such as aesthetic experience, aesthetic attitude, aesthetic value, aesthetic object and aesthetic properties. Given these criteria, and the fact that the concept of aesthetics is still an active and ongoing philosophical discussion, this work focuses on the criteria of aesthetic properties and the aesthetic experience or response they engender. The examination of fractal geometry revealed that it is a geometry based on scale rather than on the location of a point within a three-dimensional space. This enables fractal geometry to describe the complex forms and patterns created through the processes of Wild Nature. Although fractal geometry has been used to analyse the patterns of built environments from a plan perspective, it became clear from the initial review of the literature that nothing was known about the fractal properties of the environments people experience every day as they move through them. To overcome this, 21 different landscapes, ranging from highly developed city centres to relatively untouched landscapes of Wild Nature, were analysed. Although this work shows that the fractal dimension can be used to differentiate between overall landscape forms, it also shows that by itself it cannot differentiate between all of the images analysed. To overcome this, two further parameters based on the underlying structural geometry embedded within the landscape are discussed: the Power Spectrum Median Amplitude and the Level of Isotropy within the Fourier Power Spectrum. The detailed analysis of these parameters has yielded a greater understanding of the structural properties of landscapes. With this understanding, this research has moved the field of landscape design a step closer to being able to articulate a new aesthetic for ecological design.
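To make the box-counting idea behind fractal-dimension measurements of images concrete, a minimal Python/NumPy sketch follows. It illustrates the general technique only, assuming a pre-binarised image; the function name and box sizes are illustrative choices, and the thesis's actual analysis pipeline (including the Power Spectrum Median Amplitude and Level of Isotropy derived from the Fourier power spectrum) is not reproduced here.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary
    array: count the boxes N(s) of side s that contain any structure,
    then fit log N(s) against log(1/s); the slope estimates the dimension."""
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s x s boxes.
        h = (binary_image.shape[0] // s) * s
        w = (binary_image.shape[1] // s) * s
        boxes = binary_image[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        counts.append((boxes > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Illustrative use: a dense random binary "landscape" has a dimension
# near 2.0, while sparse line-like structure falls closer to 1.0.
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((256, 256)) > 0.5))
```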
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.
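To illustrate what a "time-dependent rate" means quantitatively, the sketch below encodes one commonly discussed form: an apparent rate that decays from the high spontaneous (pedigree-scale) mutation rate toward the long-term substitution rate as the measurement timescale grows. This is an illustration only; the functional form and all parameter values are assumptions, not results from the review.

```python
import math

def apparent_rate(t_myr, mu_instant=1e-7, mu_longterm=1e-8, decay=5.0):
    """Illustrative time-dependent rate curve (substitutions/site/year).
    At very short timescales the apparent rate approaches the pedigree
    (spontaneous mutation) rate, roughly an order of magnitude above the
    long-term substitution rate; at deep timescales it converges on the
    long-term rate. All parameter values here are hypothetical."""
    return mu_longterm + (mu_instant - mu_longterm) * math.exp(-decay * t_myr)

# Apparent rate at 0.01, 0.1, 1 and 10 million years.
for t in (0.01, 0.1, 1.0, 10.0):
    print(t, apparent_rate(t))
```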
Abstract:
The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly concerned with finding mechanisms or 'causes'; and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, a rejection that discards a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study in which it is possible, in principle, to find evidence against hypotheses; they are, in principle, falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but new techniques and ideas are increasingly expanding those boundaries, and it is appropriate to re-examine some topics. Over the last few decades there appears to have been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; it is simply assumed that some physical factor 'drives' evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions: the fifth, the events of the Late Cretaceous; and the sixth, which started at least 50,000 years ago and is ongoing. Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present provides one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one. This reduction is too simplistic, in that we need to know about survival and about ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts.
On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution of remains in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While spontaneous generation was still universally accepted, there was the expectation that animals would simply continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues: apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.
Abstract:
Background: Screening tests of basic cognitive status or 'mental state' have been shown to predict mortality and functional outcomes in adults. This study examined the relationship between mental state and outcomes in children with type 1 diabetes.
Objective: We aimed to determine whether mental state at diagnosis predicts the longer-term cognitive function of children with a new diagnosis of type 1 diabetes.
Methods: The mental state of 87 patients presenting with newly diagnosed type 1 diabetes was assessed using the School-Years Screening Test for the Evaluation of Mental Status. Cognitive abilities were assessed 1 week and 6 months post-diagnosis using standardized tests of attention, memory, and intelligence.
Results: Thirty-seven children (42.5%) had reduced mental state at diagnosis. Children with impaired mental state had poorer attention and memory in the week following diagnosis and, after controlling for possible confounding factors, significantly lower IQ at 6 months compared to those with unimpaired mental state (p < 0.05).
Conclusions: Cognition is acutely impaired in a significant number of children presenting with newly diagnosed type 1 diabetes. Mental state screening is an effective method of identifying children at risk of ongoing cognitive difficulties in the days and months following diagnosis. Clinicians may consider mental state screening for all newly diagnosed diabetic children to identify those at risk of cognitive sequelae.
Abstract:
Kaolinite intercalation and its application in polymer-based functional composites have attracted great interest, both in industry and in academia, since these materials frequently exhibit remarkable improvements in properties compared with the virgin polymer or with conventional micro- and macro-composites. Also of significant interest is the thermal behavior and decomposition of the kaolinite intercalation complex, because heat treatment of intercalated kaolinite is necessary for its further application, especially in the plastics and rubber industries. Although intercalation of kaolinite is an old and ongoing research topic, knowledge remains limited on kaolinite intercalation with different reagents, the mechanism of intercalation complex formation, and the associated thermal behavior and phase transitions. This review attempts to summarize the most recent achievements in the study of the thermal behavior of kaolinite intercalation complexes obtained with the most common reagents, including potassium acetate, formamide, dimethyl sulfoxide, hydrazine and urea. Further work on kaolinite intercalation complexes is also proposed at the end of the paper.
Abstract:
Advanced prostate cancer is a common and generally incurable disease. Androgen deprivation therapy (ADT) is used to treat advanced prostate cancer, with good benefits in terms of quality of life and regression of disease. However, prostate cancer invariably progresses despite ongoing treatment to a castrate-resistant state. Androgen deprivation is associated with a form of metabolic syndrome, which includes insulin resistance and hyperinsulinaemia. The mitogenic and anti-apoptotic properties of insulin, acting through the insulin and hybrid insulin/IGF-1 receptors, appear to promote prostate tumour growth. This pilot study was designed to assess any correlation between elevated insulin levels and progression to castrate-resistant prostate cancer.
Methods: 36 men receiving ADT for advanced prostate cancer were recruited at various stages of their treatment, along with 47 controls (men with localised prostate cancer, pre-treatment). Serum C-peptide (used as a surrogate marker for insulin production) was measured and compared between groups. The correlation between serum C-peptide level and time to progression to castrate-resistant disease was assessed.
Results: There was a significant elevation of C-peptide levels in the ADT group (mean = 1639 pmol/L) compared to the control group (mean = 1169 pmol/L), with a p-value of 0.025. In 17 men with a good initial response to androgen deprivation, a small negative trend towards earlier progression to castrate resistance with increasing C-peptide level was seen in the ADT group (r = -0.050); however, this did not reach statistical significance (p > 0.1).
Conclusions: This pilot study confirms an increase in serum C-peptide levels in men receiving ADT for advanced prostate cancer. A non-significant but negative trend towards earlier progression to castrate resistance with increasing C-peptide suggests the need for a formal prospective study assessing this hypothesis.
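For readers who want to see the shape of this analysis, a minimal sketch follows. The abstract does not state which statistical tests were used, so the Welch's t-test and Pearson correlation below, like the placeholder data arrays, are assumptions for illustration only.

```python
from scipy import stats

# Hypothetical placeholder data; the study's raw values are not given here.
c_peptide_adt = [1810, 1520, 1690, 1480, 1700]    # pmol/L, ADT group
c_peptide_ctrl = [1150, 1230, 1050, 1290, 1120]   # pmol/L, control group

# Between-group comparison of C-peptide levels (Welch's t-test assumed).
t_stat, p_between = stats.ttest_ind(c_peptide_adt, c_peptide_ctrl, equal_var=False)

# Correlation of C-peptide with time to castrate resistance (months, hypothetical).
time_to_progression = [30, 36, 28, 40, 33]
r, p_corr = stats.pearsonr(c_peptide_adt, time_to_progression)
print(f"between-group p={p_between:.3f}, r={r:.3f} (p={p_corr:.3f})")
```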
Abstract:
An integral part of teaching, and a principle underpinning professional practice in the early years, is the importance of reflecting on and researching our own practice. For example, in Australia, the Early Years Learning Framework: Belonging, Being and Becoming identifies "ongoing learning and reflective practice" (DEEWR, 2009, p. 13) as one of the five principles, distilled from theories and research evidence, that underpin professional practice in the early years. Recognising teaching as encompassing the role of researching pedagogical practice highlights that teaching is not simply practical or procedural but requires intellectual work. This chapter details evidence-based practice (EBP) in early years education and addresses four questions: (1) What is evidence-based practice? (2) What evidence do I draw on? (3) How might I discern relevant evidence? and (4) What is my part in generating research evidence?
Abstract:
Within early childhood education, two ideas are firmly held: play is the best way for children to learn, and parents are partners in the child's learning. While these ideas have been explored individually, limited research to date has investigated the confluence of the two: how parents of young children view the concept of play. This paper investigates parents' views on play by analysing the views of a small group of parents of Prep Year children in Queensland, Australia. The parents in this study held varying definitions of what constitutes play, and complex and contradictory notions of its value. Positive views of play were linked to 'learning without knowing it', engaging in hands-on activities, and preparation for Year One through a strong focus on academic progress. Some parents held that Prep was play-based, while others did not. The complexity and diversity of parental opinion in this study echo the ongoing commentary about how play ought to be defined. Moreover, the notion that adults may interpret play in different ways is also reflected here. The authors suggest that, for early childhood educators, these complexities require ongoing engagement, debate and reconceptualisation of the place of play in light of broader curricular and socio-political agendas.
Abstract:
While the engagement, success and retention of first year students are ongoing issues in higher education, they are currently of considerable and increasing importance as the pressures on teaching and learning from the new standards framework and performance funding intensify. This Nuts & Bolts presentation introduces the concept of a maturity model and its application to assessing the capability of higher education institutions to address student engagement, success and retention. Participants will be provided with (a) a concise description of the concept and features of a maturity model; and (b) the opportunity to explore the potential application of maturity models (i) to the management of student engagement and retention programs and strategies within an institution, and (ii) to the improvement of these features by benchmarking across the sector.
Abstract:
Nutrition interventions, in the form of both self-management education and individualised diet therapy, are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes to typical intake to facilitate recording. This thesis investigated the use of information and communication technologies (ICT) to overcome the limitations of current approaches to the nutritional management of T2DM, in particular through the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), a mobile phone photo/voice application for assessing nutrient intake in a free-living environment in older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education, was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean (±SD) BMI = 34.2±7.0 kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% of energy intake, in body weight of 0.7 kg, and in waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in interpreting the results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges of measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently omitted from Nutricam records; however, forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam.
Modifications to the method were made to allow clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.

Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids, and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced the mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. The findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 male; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake were evaluated in a sample of 10 adults (6 male; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with an EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found across energy and macronutrients, except fat (r=0.24), between the two dietary measures.
High agreement was observed between dietitians for estimates of energy and macronutrient intake derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake, and were willing to use the novel method again over longer recording periods. This research program explored two novel approaches which utilised distinct technologies to aid the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
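The EI:TEE plausibility check reported in Study 4 can be sketched as follows. The abstract does not give the cutoff used to classify a subject's reported intake as implausible, so the 0.70 threshold below, like the data values, is a placeholder assumption.

```python
import statistics

def ei_tee_ratios(reported_ei_mj, measured_tee_mj):
    """Per-subject ratio of reported energy intake (EI) to total energy
    expenditure (TEE) from doubly labelled water; ratios well below 1.0
    indicate under-reporting."""
    return [ei / tee for ei, tee in zip(reported_ei_mj, measured_tee_mj)]

# Hypothetical values; Study 4 reports a group mean EI:TEE of 0.76 for
# both the NuDAM and the weighed food record.
ratios = ei_tee_ratios([7.5, 8.2, 6.9, 9.4], [10.1, 10.8, 9.0, 10.5])
print(round(statistics.mean(ratios), 2))

# Flag subjects whose reporting looks implausible under an assumed cutoff.
implausible = [i for i, r in enumerate(ratios) if r < 0.70]
print(implausible)
```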