424 results for current account cyclicality
Abstract:
Each year The Australian Centre for Philanthropy and Nonprofit Studies (ACPNS) at QUT analyses statistics on tax-deductible donations made by Australians in their individual income tax returns to Deductible Gift Recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations made by Australian taxpayers to DGRs for the period 1 July 2010 to 30 June 2011, extracted from the Australian Taxation Office's publication Taxation Statistics 2010-2011.
Abstract:
Educational and developmental psychology faces a number of current and future challenges and opportunities in Australia. In this commentary we consider the identity of educational and developmental psychology in terms of the features that distinguish it from other specialisations, and address issues related to training, specialist endorsement, supervision and rebating under the Australian government's Medicare system. The current status of training in Australia is considered through a review of the four university programs in educational and developmental psychology currently offered, and the employment destinations of their graduates. Although the need for traditional services in settings such as schools, hospitals, disability and community organisations will undoubtedly continue, the role of educational and developmental psychologists is being influenced and to some extent redefined by advances in technology, medicine, genetics, and neuroscience. We review some of these advances and conclude with recommendations for training and professional development that will enable Australian educational and developmental psychologists to meet the challenges ahead.
Abstract:
We investigated critical belief-based targets for promoting the introduction of solid foods to infants at six months. First-time mothers (N = 375) completed a Theory of Planned Behaviour belief-based questionnaire and follow-up questionnaire assessing the age the infant was first introduced to solids. Normative beliefs about partner/spouse (β = 0.16) and doctor (β = 0.22), and control beliefs about commercial baby foods available for infants before six months (β = −0.20), predicted introduction of solids at six months. Intervention programs should target these critical beliefs to promote mothers’ adherence to current infant feeding guidelines to introduce solids at around six months.
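The β values reported above are standardized regression coefficients from a belief-based predictive model. As a hedged, minimal sketch of how such coefficients can be obtained (the simulated belief scores and the plain least-squares fit below are illustrative only, not the authors' Theory of Planned Behaviour analysis):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS fit on z-scored predictors and outcome, returning standardized coefficients."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

# Hypothetical example: three belief scores predicting timing of solids introduction
rng = np.random.default_rng(0)
X = rng.normal(size=(375, 3))            # partner norm, doctor norm, control belief (simulated)
y = 0.16 * X[:, 0] + 0.22 * X[:, 1] - 0.20 * X[:, 2] + rng.normal(size=375)
print(standardized_betas(X, y))          # approximately [0.16, 0.22, -0.20]
```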
Abstract:
It is exciting to be living at a time when the big questions in biology can be investigated using modern genetics and computing [1]. Bauzà-Ribot et al. [2] take on one of the fundamental drivers of biodiversity, the effect of continental drift in the formation of the world's biota [3, 4], employing next-generation sequencing of whole mitochondrial genomes and modern Bayesian relaxed molecular clock analysis. Bauzà-Ribot et al. [2] conclude that vicariance via plate tectonics best explains the genetic divergence between subterranean metacrangonyctid amphipods currently found on islands separated by the Atlantic Ocean. This finding is a big deal in biogeography, and science generally [3], as many other presumed biotic tectonic divergences have been explained as probably due to more recent transoceanic dispersal events [4]. However, molecular clocks can be problematic [5, 6] and we have identified three issues with the analyses of Bauzà-Ribot et al. [2] that cast serious doubt on their results and conclusions. When we reanalyzed their mitochondrial data and attempted to account for problems with calibration [5, 6], modeling rates across branches [5, 7] and substitution saturation [5], we inferred a much younger date for their key node. This implies either a later trans-Atlantic dispersal of these crustaceans, or more likely a series of later invasions of freshwaters from a common marine ancestor, but either way probably not ancient tectonic plate movements.
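The reanalysis hinges on how calibration and substitution-rate assumptions translate genetic distances into node ages. As a deliberately simplified illustration (a strict clock with hypothetical numbers, not the Bayesian relaxed-clock machinery used in the study), the sketch below shows how doubling the assumed per-lineage rate, or shrinking a saturation-inflated distance, halves the inferred divergence time.

```python
def divergence_time_my(distance, rate_per_my):
    """Divergence time (million years) under a strict molecular clock: t = d / (2r),
    where d is the pairwise genetic distance (substitutions/site) and r is the
    per-lineage substitution rate (substitutions/site/My)."""
    return distance / (2.0 * rate_per_my)

d = 0.30                      # hypothetical mitochondrial pairwise distance
slow, fast = 0.01, 0.02       # hypothetical calibration-dependent rates
print(divergence_time_my(d, slow))   # 15.0 My
print(divergence_time_my(d, fast))   # 7.5 My -- doubling the rate halves the age
```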
Abstract:
Multiple sclerosis (MS) is a serious neurological disorder affecting young Caucasian individuals, usually with an age of onset of 18 to 40 years. Females account for approximately 60% of MS cases, and the manifestation and course of the disease is highly variable from patient to patient. The disorder is characterised by the development of plaques within the central nervous system (CNS). Many gene expression studies have been undertaken to look at the specific patterns of gene transcript levels in MS. Human tissues and experimental mice were used in these gene-profiling studies, and a very valuable and interesting set of data has resulted from these various expression studies. In general, genes showing variable expression include mainly immunological and inflammatory genes, stress and antioxidant genes, as well as metabolic and central nervous system markers. Of particular interest are a number of genes localised to susceptibility loci previously shown to be in linkage with MS. However, due to the clinical complexity of the disease, the heterogeneity of the tissues used in expression studies, as well as the variable DNA chips/membranes used for the gene profiling, it is difficult to interpret the available information. Although this information is essential for the understanding of the pathogenesis of MS, it is difficult to decipher and define the gene pathways involved in the disorder. Experiments in gene expression profiling in MS have been numerous and lists of candidates are now available for analysis. Researchers have investigated gene expression in peripheral blood mononuclear cells (PBMCs), in the MS animal model Experimental Allergic Encephalomyelitis (EAE), and in post mortem MS brain tissues. This review will focus on the results of these studies.
Abstract:
Migraine is a complex familial condition that imparts a significant burden on society. There is evidence for a role of genetic factors in migraine, and elucidating the genetic basis of this disabling condition remains the focus of much research. In this review we discuss the results of genetic studies to date, from the discovery of the role of neural ion channel gene mutations in familial hemiplegic migraine (FHM) to linkage analyses and candidate gene studies in the more common forms of migraine. The success achieved in FHM, where genetic defects associated with the disorder have been discovered, remains elusive in common migraine, and causative genes have not yet been identified. Thus we suggest additional approaches for analysing the genetic basis of this disorder. The continuing search for migraine genes may aid in a greater understanding of the mechanisms that underlie the disorder and potentially lead to significant diagnostic and therapeutic applications.
Abstract:
Bactrocera dorsalis sensu stricto, B. papayae, B. philippinensis and B. carambolae are serious pest fruit fly species of the B. dorsalis complex that predominantly occur in south-east Asia and the Pacific. Identifying molecular diagnostics has proven problematic for these four taxa, a situation that confounds biosecurity and quarantine efforts and which may be the result of at least some of these taxa representing the same biological species. We therefore conducted a phylogenetic study of these four species (and closely related outgroup taxa) based on individuals collected from a wide geographic range, sequencing six loci (cox1, nad4-3′, CAD, period, ITS1, ITS2) for approximately 20 individuals from each of 16 sample sites. Data were analysed within maximum likelihood and Bayesian phylogenetic frameworks for individual loci and concatenated data sets, to which we applied multiple monophyly and species delimitation tests. Species monophyly was measured by clade support (posterior probability or bootstrap resampling for Bayesian and likelihood analyses, respectively), Rosenberg's reciprocal monophyly measure P(AB), Rodrigo's P(RD), and the genealogical sorting index, gsi. We specifically tested whether there was phylogenetic support for the four 'ingroup' pest species using a data set of multiple individuals sampled from a number of populations. Based on our combined data set, Bactrocera carambolae emerges as a distinct monophyletic clade, whereas B. dorsalis s.s., B. papayae and B. philippinensis are unresolved. These data add to the growing body of evidence that B. dorsalis s.s., B. papayae and B. philippinensis are the same biological species, which poses consequences for quarantine, trade and pest management.
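Monophyly testing of this kind asks whether all samples of a nominal species fall within a single exclusive clade. As a minimal, self-contained sketch (hypothetical tree and tip labels, not the authors' Bayesian/likelihood pipeline or the P(AB), P(RD) and gsi statistics), the following checks whether a set of tips forms a clade in a rooted tree represented as nested tuples.

```python
def collect_clades(tree):
    """Return (tips under this node, list of tip-sets for every clade) for a nested-tuple tree."""
    if not isinstance(tree, tuple):          # leaf label
        return {tree}, []
    tips, clade_sets = set(), []
    for child in tree:
        child_tips, child_clades = collect_clades(child)
        tips |= child_tips
        clade_sets += child_clades
    clade_sets.append(tips)
    return tips, clade_sets

def is_monophyletic(tree, target_tips):
    """True if some clade in the rooted tree contains exactly the target tips."""
    _, clade_sets = collect_clades(tree)
    return any(clade == set(target_tips) for clade in clade_sets)

# Hypothetical tree: carambolae samples form a clade; dorsalis/papayae samples interleave
tree = ((("car1", "car2"), "car3"), (("dor1", "pap1"), ("dor2", "pap2")))
print(is_monophyletic(tree, {"car1", "car2", "car3"}))   # True  (resolved as monophyletic)
print(is_monophyletic(tree, {"dor1", "dor2"}))           # False (unresolved with papayae)
```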
Abstract:
In South and Southeast Asia, postharvest loss causes material waste of up to 66% in fruits and vegetables, 30% in oilseeds and pulses, and 49% in roots and tubers. The efficiency of postharvest equipment directly affects industrial-scale food production. To enhance current processing methods and devices, it is essential to analyze the responses of food materials under loading operations. Food materials undergo different types of mechanical loading during postharvest and processing stages. Therefore, it is important to determine the properties of these materials under different types of loads, such as tensile, compression, and indentation. This study presents a comprehensive analysis of the available literature on the tensile properties of different food samples. The aim of this review was to categorize the available methods of tensile testing for agricultural crops and food materials in order to identify an appropriate sample size and tensile test method. The results were then applied to perform tensile tests on pumpkin flesh and peel samples, in particular on arc-sided samples at a constant loading rate of 20 mm min⁻¹. The results showed the maximum tensile stress of pumpkin flesh and peel samples to be 0.535 and 1.45 MPa, respectively. The elastic modulus of the flesh and peel samples was 6.82 and 25.2 MPa, respectively, while the failure modulus values were 14.51 and 30.88 MPa, respectively. The results of the tensile tests were also used to develop a finite element model of mechanical peeling of tough-skinned vegetables. However, further investigation is needed into the effects of deformation rate, moisture content, and tissue texture on the tensile responses of food materials.
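The reported quantities follow directly from the standard engineering definitions: tensile stress is force over cross-sectional area, and the elastic modulus is the stress-to-strain ratio in the linear region. A small, hedged sketch with hypothetical specimen numbers (the force, area and strain below are illustrative, not the study's raw data):

```python
def tensile_stress(force_n, area_mm2):
    """Engineering tensile stress in MPa (1 N / mm^2 = 1 MPa)."""
    return force_n / area_mm2

def elastic_modulus(stress_mpa, strain):
    """Young's modulus in MPa from the linear region: E = sigma / epsilon."""
    return stress_mpa / strain

# Hypothetical arc-sided specimen readings (illustrative only)
peak_force = 26.75        # N at failure
cross_section = 50.0      # mm^2
linear_stress = 0.546     # MPa measured at a strain of 0.08 in the linear region
linear_strain = 0.08

print(tensile_stress(peak_force, cross_section))       # 0.535 MPa, the order of the flesh result
print(elastic_modulus(linear_stress, linear_strain))   # ~6.8 MPa, the order of the flesh modulus
```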
Abstract:
The overall aim of our research was to characterize airborne particles from selected nanotechnology processes and to utilize the data to develop and test quantitative particle concentration-based criteria that can be used to trigger an assessment of particle emission controls. We investigated particle number concentration (PNC), particle mass (PM) concentration, count median diameter (CMD), alveolar deposited surface area, elemental composition, and morphology from sampling of aerosols arising from six nanotechnology processes. These included fibrous and non-fibrous particles, including carbon nanotubes (CNTs). We adopted standard occupational hygiene principles in relation to controlling peak emissions and exposures, as outlined by both Safe Work Australia (1) and the American Conference of Governmental Industrial Hygienists (ACGIH®) (2). The results from the study were used to analyse peak and 30-minute averaged particle number and mass concentration values measured during the operation of the nanotechnology processes. Analysis of peak (highest value recorded) and 30-minute averaged values revealed that peak PNC (20–1000 nm) emitted from the nanotechnology processes was up to three orders of magnitude greater than the local background particle concentration (LBPC), peak PNC (300–3000 nm) was up to an order of magnitude greater, and PM2.5 concentrations were up to four orders of magnitude greater. For three of these nanotechnology processes, the 30-minute averaged particle number and mass concentrations were also significantly different from the LBPC (p-value < 0.001). We propose that emission or exposure controls may need to be implemented or modified, or further assessment of the controls be undertaken, if concentrations exceed three times the LBPC, which is also used as the local particle reference value, for more than a total of 30 minutes during a workday, and/or if a single short-term measurement exceeds five times the local particle reference value. The use of these quantitative criteria, which we are terming the universal excursion guidance criteria, will account for the typical variation in LBPC and the inaccuracy of instruments, while being precautionary enough to highlight peaks in particle concentration likely to be associated with particle emission from the nanotechnology process. Recommendations on when to utilize local excursion guidance criteria are also provided.
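The proposed universal excursion guidance criteria reduce to two numeric checks against the local particle reference value (the LBPC): a cumulative-duration check at three times the reference value and a single-measurement check at five times it. A minimal sketch of that decision rule, assuming a logged time series with one reading per minute (the instrument, interval and readings are hypothetical):

```python
def excursion_flags(concentrations, lbpc, interval_min=1.0,
                    sustained_factor=3.0, peak_factor=5.0, sustained_limit_min=30.0):
    """Apply the proposed excursion guidance criteria to a concentration time series.

    Flags further assessment of emission controls if:
      * readings above sustained_factor * LBPC total more than sustained_limit_min, or
      * any single reading exceeds peak_factor * LBPC.
    """
    minutes_above = sum(interval_min for c in concentrations if c > sustained_factor * lbpc)
    peak_exceeded = any(c > peak_factor * lbpc for c in concentrations)
    return {
        "sustained_exceeded": minutes_above > sustained_limit_min,
        "peak_exceeded": peak_exceeded,
        "assess_controls": minutes_above > sustained_limit_min or peak_exceeded,
    }

# Hypothetical particle number concentrations (particles/cm^3) logged each minute; LBPC assumed 1000
readings = [1200] * 400 + [4000] * 35 + [1500] * 45
print(excursion_flags(readings, lbpc=1000))
# {'sustained_exceeded': True, 'peak_exceeded': False, 'assess_controls': True}
```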
Abstract:
AIM: To document and compare current practice in nutrition assessment of Parkinson's disease by dietitians in Australia and Canada, in order to identify priority areas for review and development of practice guidelines and to direct future research. METHODS: An online survey was distributed to DAA members and PEN subscribers through their email newsletters. The survey captured current practice in the phases of the Nutrition Care Plan. The results of the assessment phase are presented here. RESULTS: Eighty-four dietitians responded. Differences in practice existed in the choice of nutrition screening and assessment tools, including appropriate BMI ranges. Nutrition impact symptoms were commonly assessed, but information about Parkinson's disease medication interactions was not consistently assessed. CONCLUSIONS: The variation in practice related to the use of screening and assessment methods may result in the identification of different goals for subsequent interventions. Even more practice variation was evident for those items more specific to Parkinson's disease, which may be due to the lack of evidence to guide practice. Further research is required to support decisions for nutrition assessment of Parkinson's disease.
Abstract:
Skin cancer is one of the most commonly occurring cancer types, with substantial social, physical, and financial burdens on both individuals and societies. Although the role of UV light in initiating skin cancer development has been well characterized, genetic studies continue to show that predisposing factors can influence an individual's susceptibility to skin cancer and response to treatment. In the future, it is hoped that genetic profiles, comprising a number of genetic markers collectively involved in skin cancer susceptibility and response to treatment or prognosis, will aid in more accurately informing practitioners' choices of treatment. Individualized treatment based on these profiles has the potential to increase the efficacy of treatments, saving both time and money for the patient by avoiding the need for extensive or repeated treatment. Increased treatment responses may in turn prevent recurrence of skin cancers, reducing the burden of this disease on society. Currently existing pharmacogenomic tests, such as those that assess variation in the metabolism of the anticancer drug fluorouracil, have the potential to reduce the toxic effects of anti-tumor drugs used in the treatment of non-melanoma skin cancer (NMSC) by determining individualized appropriate dosage. If the savings generated by reducing adverse events negate the costs of developing these tests, pharmacogenomic testing may increasingly inform personalized NMSC treatment.
Abstract:
Lean strategies have been developed to eliminate or reduce manufacturing waste and thus improve operational efficiency in manufacturing processes. However, implementing lean strategies requires a large amount of resources and, in practice, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints. There is currently no systematic methodology available for selecting appropriate lean strategies within a manufacturer's resource constraints. In the lean transformation process, it is also critical to measure the current and desired leanness levels in order to clearly evaluate lean implementation efforts. Despite the fact that many lean strategies are utilized to reduce or eliminate manufacturing waste, little effort has been directed towards properly assessing the leanness of manufacturing organizations. In practice, a single or specific group of metrics (either qualitative or quantitative) will only partially measure the overall leanness. Existing leanness assessment methodologies do not offer a comprehensive evaluation method that integrates both quantitative and qualitative lean measures into a single quantitative value for measuring the overall leanness of an organization. This research aims to develop mathematical models and a systematic methodology for selecting appropriate lean strategies and evaluating leanness levels in manufacturing organizations. Mathematical models were formulated and a methodology was developed for selecting appropriate lean strategies, within manufacturers' limited available resources, to reduce their identified wastes. A leanness assessment model was developed using fuzzy concepts to assess the leanness level and to recommend an optimum leanness value for a manufacturing organization. In the proposed leanness assessment model, both quantitative and qualitative input factors are taken into account. Based on a program developed in MATLAB and C#, a decision support tool (DST) was developed for decision makers to select lean strategies and evaluate the leanness value according to the proposed models and methodology, and hence to sustain lean implementation efforts. A case study was conducted to demonstrate the effectiveness of the proposed models and methodology. Case study results suggested that, of the 10 wastes identified, the case organization (ABC Limited) is able to improve a maximum of six wastes at the selected workstation within its resource limitations. The selected wastes are: unnecessary motion, setup time, unnecessary transportation, inappropriate processing, work-in-process inventory, and raw material inventory; the suggested lean strategies are: 5S, Just-In-Time, the Kanban System, the Visual Management System (VMS), Cellular Manufacturing, Standard Work Process using method-time measurement (MTM), and Single Minute Exchange of Die (SMED). From the suggested lean strategies, the impact of 5S was demonstrated by measuring the leanness level of two different situations in ABC. After that, MTM was suggested as a standard work process for further improvement of the current leanness value. The initial status of the organization showed a leanness value of 0.12. By applying 5S, the leanness level improved significantly to 0.19, and the simulation of MTM as a standard work method showed the leanness value could be improved to 0.31. The optimum leanness value of ABC was calculated to be 0.64.
These leanness values provided a quantitative indication of the impact of the improvement initiatives on the overall leanness level of the case organization. Sensitivity analysis and a t-test were also performed to validate the proposed model. This research advances the current knowledge base by developing mathematical models and methodologies to overcome the lean strategy selection and leanness assessment problems. By selecting appropriate lean strategies, a manufacturer can better prioritize implementation efforts and resources to maximize the benefits of implementing lean strategies in their organization. The leanness index is used to evaluate an organization's current (before lean implementation) leanness state against the state after lean implementation and to establish a benchmark (the optimum leanness state). Hence, this research provides a continuous improvement tool for a lean manufacturing organization.
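The leanness values quoted above (0.12 rising to 0.19 and 0.31 against an optimum of 0.64) are single numbers aggregated from many quantitative and qualitative measures. The thesis does this with a fuzzy assessment model; the sketch below is only a simplified, crisp weighted-average stand-in (measure names, scores and weights are hypothetical) showing how heterogeneous normalized measures can be collapsed into one leanness index in [0, 1].

```python
def leanness_index(scores, weights):
    """Weighted average of normalized lean performance measures (each in [0, 1]).
    A simplified stand-in for the thesis's fuzzy leanness assessment model."""
    total_weight = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical measures mixing quantitative and qualitative (survey-scaled) inputs
scores = {"setup_time": 0.10, "wip_inventory": 0.15, "motion": 0.05,
          "transportation": 0.20, "workplace_organisation": 0.10}
weights = {"setup_time": 2.0, "wip_inventory": 2.0, "motion": 1.0,
           "transportation": 1.0, "workplace_organisation": 1.5}
print(round(leanness_index(scores, weights), 2))   # 0.12 -- a low pre-improvement leanness
```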
Abstract:
Purpose: Intensity modulated radiotherapy (IMRT) treatments require more beam-on time and produce more linac head leakage than conventional, unmodulated radiotherapy treatments delivering similar doses. It is necessary to take this increased leakage into account when evaluating the results of radiation surveys around bunkers that are, or will be, used for IMRT. The recommended procedure of applying a monitor-unit based workload correction factor to secondary barrier survey measurements, to account for this increased leakage when evaluating radiation survey measurements around IMRT bunkers, can lead to potentially costly overestimation of the required barrier thickness. This study aims to provide initial guidance on the validity of reducing the value of the correction factor when applied to different radiation barriers (primary barriers, doors, maze walls and other walls) by evaluating three different bunker designs. Methods: Radiation survey measurements of primary, scattered and leakage radiation were obtained at each of five survey points around each of three different radiotherapy bunkers, and the contribution of leakage to the total measured radiation dose at each point was evaluated. Measurements at each survey point were made with the linac gantry set to 12 equidistant positions from 0 to 330°, to assess the effects of radiation beam direction on the results. Results: For all three bunker designs, less than 0.5% of the dose measured at and alongside the primary barriers, less than 25% of the dose measured outside the bunker doors, and up to 100% of the dose measured outside other secondary barriers was found to be caused by linac head leakage. Conclusions: Results of this study suggest that IMRT workload corrections are unnecessary for survey measurements made at and alongside primary barriers. Use of reduced IMRT workload correction factors is recommended when evaluating survey measurements around a bunker door, provided that a subset of the measurements used in this study are repeated for the bunker in question. Reduction of the correction factor for other secondary barrier survey measurements is not recommended unless the contribution from leakage is separately evaluated.
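The barrier-specific recommendations follow from how much of each survey reading is leakage: only the leakage component scales with the extra IMRT monitor units, so the appropriate correction shrinks as the leakage fraction falls. A hedged, simplified sketch of that scaling (the monitor-unit ratio and leakage fractions below are illustrative, not the study's survey data):

```python
def corrected_survey_dose(measured_dose, leakage_fraction, imrt_mu_factor):
    """Scale only the leakage component of a survey measurement by the IMRT
    monitor-unit workload factor; primary and scattered components are unchanged.
    Simplified illustration, not the formal shielding methodology."""
    return measured_dose * ((1.0 - leakage_fraction) + leakage_fraction * imrt_mu_factor)

mu_factor = 5.0            # hypothetical IMRT-to-conventional monitor-unit ratio
for barrier, f_leak in [("primary barrier", 0.005), ("door", 0.25), ("secondary wall", 1.0)]:
    print(barrier, round(corrected_survey_dose(1.0, f_leak, mu_factor), 2))
# primary barrier 1.02   -> correction is negligible
# door 2.0               -> a reduced correction is defensible
# secondary wall 5.0     -> the full correction is still needed
```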
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, due to the unsatisfactory selection of first flush indicators and how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected to explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were also used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi criteria decision making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off for every 10% of runoff volume interval from the beginning of the event (interval first flush indicators, LV) and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators, P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences to TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed when the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. Rainfall intensity (with respect to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume.
This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and the availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush, a low magnitude first flush, or be non-existent. Investigation of the influence of catchment characteristics on first flush found that the key factors that influence the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet, and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to the catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems, due to their different wash-off behaviour. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and end parts of a rainfall event. This is related to the composition of TP available for wash-off.
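The two indicator sets are, in essence, readings off the dimensionless load-versus-volume curve: the interval indicators give the cumulative load fraction washed off by each 10% of runoff volume, and the section indicators give the load fraction washed off within each 10% slice. A minimal sketch of that calculation from paired cumulative runoff and load series (the event data below are invented for illustration, not from the Coomera Waters monitoring):

```python
import numpy as np

def first_flush_indicators(cum_runoff, cum_load):
    """Interval (cumulative) and section (incremental) first flush indicators at
    each 10% increment of runoff volume, from the dimensionless load-vs-volume curve."""
    v = np.asarray(cum_runoff, float) / cum_runoff[-1]   # normalized runoff volume
    l = np.asarray(cum_load, float) / cum_load[-1]       # normalized pollutant load
    deciles = np.arange(0.1, 1.01, 0.1)
    interval = np.interp(deciles, v, l)                  # cumulative load at 10%, 20%, ...
    section = np.diff(np.concatenate(([0.0], interval))) # load washed off in each 10% slice
    return interval, section

# Hypothetical event: load accumulates faster than runoff early on (first flush)
cum_runoff = [0, 10, 25, 40, 60, 80, 100]
cum_load = [0, 30, 55, 70, 85, 95, 100]
lv, p = first_flush_indicators(cum_runoff, cum_load)
print(np.round(lv, 2))   # e.g. the first 10% of runoff carries ~30% of the load
print(np.round(p, 2))
```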
Abstract:
This thesis is a comparative study of Australian and Malaysian early childhood teachers' use of ICTs and the Internet, in terms of their personal and professional comfort with ICTs, their pedagogical beliefs, and their reported classroom practice. The study found that teachers from both countries were relatively comfortable with digital technologies and the Internet, with most holding positive beliefs about ICT usage. Structural barriers in classrooms include a lack of Internet access, and a wide gap exists between teachers' positive beliefs and their classroom practice. The study suggests the need for strategic and targeted professional development for teachers.