945 results for Few
Abstract:
Over recent years there has been an increase in the literature examining youth with Autism Spectrum Disorders (ASD). The growth in this area of research has highlighted a significant gap in our understanding of suitable interventions for people with ASD and the treatment of co-occurring psychiatric disorders [1-3]. Children with ASD are at increased risk of experiencing depressive symptoms and developing depression; however, with very few proven interventions available for preventing and treating depression in children with ASD, there is a need for further research in this area.
Abstract:
There is growing and converging evidence that cannabis may be a major risk factor in people with psychotic disorders and prodromal psychotic symptoms. The lack of available pharmacological treatments for cannabis use indicates that psychological interventions should be a high priority, especially among people with psychotic disorders. However, there have been few randomised controlled trials (RCTs) of psychological interventions among this group. In the present study we critically review RCTs of psychological and pharmacological interventions among people with psychotic disorders, giving particular attention to those studies which report cannabis use outcomes. We then review data regarding treatment preferences among this group. RCTs of interventions within "real world" mental health systems among adults with severe mental disorders suggest that cannabis use is amenable to treatment in real world settings among people with psychotic disorders. RCTs of manual-guided interventions among cannabis users indicate that while brief interventions are associated with reductions in cannabis use, longer interventions may be more effective. Additionally, the RCTs reviewed suggest that treatment with antipsychotic medication is not associated with a worsening of cannabis cravings or use and may be beneficial. The development of cannabinoid agonist medication may be an effective strategy for cannabis dependence and suitable for people with psychotic disorders. The development of cannabis use interventions for people with psychotic disorders should also consider patients' treatment preferences. Initial results indicate that face-to-face interventions focussed on cannabis use may be preferred. Further research investigating the treatment preferences of people with psychotic disorders who use cannabis is needed.
Abstract:
Background: Understanding the spatial distribution of suicide can inform the planning, implementation and evaluation of suicide prevention activity. This study explored spatial clusters of suicide in Australia, and investigated likely socio-demographic determinants of these clusters. Methods: National suicide and population data at a statistical local area (SLA) level were obtained from the Australian Bureau of Statistics for the period 1999 to 2003. Standardised mortality ratios (SMR) were calculated at the SLA level, and Geographic Information System (GIS) techniques were applied to investigate the geographical distribution of suicides and detect clusters of high risk in Australia. Results: Male suicide incidence was relatively high in the northeast of Australia, and parts of the east coast, central and southeast inland, compared with the national average. Among the total male population and males aged 15 to 34, Mornington Shire contained all or part of the primary high-risk cluster for suicide, followed by the Bathurst-Melville area, one of the secondary clusters in the north coastal area of the Northern Territory. Other secondary clusters changed with the selection of cluster radius and age group. For males aged 35 to 54 years, only one cluster in the east of the country was identified. There was only one significant female suicide cluster, near Melbourne; other SLAs had very few female suicide cases and were not identified as clusters. Male suicide clusters had a higher proportion of Indigenous population and a lower median socio-economic index for areas (SEIFA) than the national average, but their shapes changed with the selection of the maximum cluster radius setting. Conclusion: This study found high-risk suicide clusters at the SLA level in Australia, which appeared to be associated with lower median socio-economic status and a higher proportion of Indigenous population. Future suicide prevention programs should focus on these high-risk areas.
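The SMR calculation described above is, for each SLA, simply the observed number of suicides divided by the number expected if the national rate applied to that SLA's population. A minimal Python sketch of that arithmetic, using hypothetical counts rather than the study's ABS data:

```python
# Hypothetical national totals and SLA-level counts; the study's actual figures
# come from ABS data for 1999-2003 and are not reproduced here.
national_deaths = 12_000
national_population = 19_000_000
national_rate = national_deaths / national_population  # deaths per person over the period

sla_data = {
    "SLA A": {"observed": 12, "population": 8_000},
    "SLA B": {"observed": 3, "population": 15_000},
}

for name, d in sla_data.items():
    expected = d["population"] * national_rate  # deaths expected at the national rate
    smr = d["observed"] / expected              # SMR > 1 indicates above-average risk
    print(f"{name}: expected = {expected:.1f}, SMR = {smr:.2f}")
```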
Abstract:
Brief self-report symptom checklists are often used to screen for postconcussional disorder (PCD) and posttraumatic stress disorder (PTSD) and are highly susceptible to symptom exaggeration. This study examined the utility of the five-item Mild Brain Injury Atypical Symptoms Scale (mBIAS) designed for use with the Neurobehavioral Symptom Inventory (NSI) and the PTSD Checklist–Civilian (PCL–C). Participants were 85 Australian undergraduate students who completed a battery of self-report measures under one of three experimental conditions: control (i.e., honest responding, n = 24), feign PCD (n = 29), and feign PTSD (n = 32). Measures were the mBIAS, NSI, PCL–C, Minnesota Multiphasic Personality Inventory–2, Restructured Form (MMPI–2–RF), and the Structured Inventory of Malingered Symptomatology (SIMS). Participants instructed to feign PTSD and PCD had significantly higher scores on the mBIAS, NSI, PCL–C, and MMPI–2–RF than did controls. Few differences were found between the feign PCD and feign PTSD groups, with the exception of scores on the NSI (feign PCD > feign PTSD) and PCL–C (feign PTSD > feign PCD). Optimal cutoff scores on the mBIAS of ≥8 and ≥6 were found to reflect “probable exaggeration” (sensitivity = .34; specificity = 1.0; positive predictive power, PPP = 1.0; negative predictive power, NPP = .74) and “possible exaggeration” (sensitivity = .72; specificity = .88; PPP = .76; NPP = .85), respectively. Findings provide preliminary support for the use of the mBIAS as a tool to detect symptom exaggeration when administering the NSI and PCL–C.
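The cutoff statistics reported above follow from a simple cross-classification of feigning and honest respondents at a given mBIAS cutoff. A minimal Python sketch of the definitions of sensitivity, specificity, PPP and NPP, using hypothetical scores rather than the study data:

```python
def classification_stats(feigning_scores, honest_scores, cutoff):
    """Sensitivity, specificity, PPP and NPP when scores >= cutoff are flagged."""
    tp = sum(s >= cutoff for s in feigning_scores)   # feigners correctly flagged
    fn = len(feigning_scores) - tp                   # feigners missed
    fp = sum(s >= cutoff for s in honest_scores)     # honest responders wrongly flagged
    tn = len(honest_scores) - fp                     # honest responders correctly cleared
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPP": tp / (tp + fp) if (tp + fp) else float("nan"),
        "NPP": tn / (tn + fn) if (tn + fn) else float("nan"),
    }

# Hypothetical mBIAS totals for a feigning group and an honest-responding group.
feigning = [9, 7, 6, 5, 8, 10, 4, 6]
honest = [2, 3, 1, 4, 5, 2, 3, 1]
print(classification_stats(feigning, honest, cutoff=6))
```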
Abstract:
A core component of the prevention of re-occurring incidents within the rail industry is the rail safety investigation. Within the current Australasian rail industry, the nature of incident investigations varies considerably between organisations. As it stands, most investigations are conducted by the various state rail operators and regulators, with the more major investigations in Australia being conducted or overseen by the Australian Transport Safety Bureau (ATSB). Because of the varying nature of these investigations, the current training methods for rail incident investigators also vary widely. While there are several commonly accepted training courses available to investigators in Australasia, none appears to offer the breadth of development needed for a comprehensive pathway, and no single course covers the entire breadth of competencies required by the industry. These courses range in duration from a few days to several years, and some are run in-house while others are run by external consultants or registered training organisations. Through consultations with rail operators and regulators in Australasia, this paper will identify the capabilities required for rail incident investigation and explore the current training options available for rail incident investigators.
Abstract:
Control of biospecimen quality that is linked to processing is one of the goals of biospecimen science. Consensus is lacking, however, regarding optimal sample quality-control (QC) tools (ie, markers and assays). The aim of this review was to identify QC tools, both for fluid and solid-tissue samples, based on a comprehensive and critical literature review. The most readily applicable tools are those with a known threshold for the preanalytical variation and a known reference range for the QC analyte. Only a few meaningful markers were identified that meet these criteria, such as CD40L for assessing serum exposure at high temperatures and VEGF for assessing serum freeze-thawing. To fully assess biospecimen quality, multiple QC markers are needed. Here we present the most promising biospecimen QC tools that were identified.
Abstract:
Recent decades have witnessed a global acceleration of legislative and private sector initiatives to deal with cross-border insolvency. Legislative institutions include the various national implementations of the Model Law on Cross-Border Insolvency (Model Law) published by the United Nations Commission on International Trade Law (UNCITRAL) [3]. Private mechanisms include cross-border protocols developed and utilised by insolvency professionals and their advisers (often with the imprimatur of the judiciary), on both general and ad hoc bases. The Asia Pacific region has not escaped the effect of those developments, and the economic turmoil of the past few years has provided an early test for some of the emerging initiatives in that region. This two-part article explores the operation of those institutions through the medium of three recent cases.
Abstract:
In Australia, few fashion brands have intervened in the design of their products or the systems around their products to tackle environmental pollution and waste. Instead, support of charities (whether social or environmental) has become conflated with sustainability in the eyes of the public. However, three established Australian brands recently put forward initiatives which explicitly tackle the pre-consumer or post-consumer waste associated with their products. In 2011, Billabong, one of the largest surfwear companies in the world, developed a collection of board shorts made from recycled bottles that are also recyclable at end of life. The initiative has been promoted in partnership with Bob Marley’s son Rohan Marley, and the graphics of the board shorts reference the Rastafarian colours and make use of Marley’s song lyrics. In this way, the company has tapped into an aspect of surf culture linked to environmental activism, in which the natural world is venerated. Two mid-market initiatives, by Metalicus and Country Road, each have a social outcome that arguably aligns with the values of their middle-class consumer base. Metalicus is spearheading a campaign for Australian garment manufacturers to donate their pre-consumer waste – fabric off-cuts – to the charity Open Family Australia to be manufactured into quilts for the homeless. Country Road has partnered with the Australian Red Cross to implement a recycling scheme in which consumers donate their old Country Road garments in exchange for a Country Road gift voucher. Both strategies, while tackling waste, tell an altruistic story in which the disadvantaged can benefit from the consumption habits of the middle class. To varying degrees, the initiative chosen by each company feeds into the stories they tell about themselves and about the consumers who purchase their clothing. However, how can we assess the impact of these schemes on waste management in real terms, or indeed the worth of each scheme in the wider context of the fashion system? This paper will assess the claims made by the companies and analyse their efficacy, suggesting that a more nuanced assessment of green claims is required, in which ‘green’ comes in many tonal variations.
Abstract:
This case study exemplifies a ‘writing movement’, which is currently occurring in various parts of Australia through the support of social media. A concept emerging from the café scene in San Francisco, ‘Shut Up and Write!’ is a meetup group that brings writers together at a specific time and place to write side by side, thus making writing practice social. This concept has been applied to the academic environment, and our case study explores the positive outcomes in two locations: RMIT University and QUT. We believe that this informal learning practice can be implemented to assist research students in developing academic skills. Research students spend the majority of their time outside of formal learning environments. Doctoral candidates enter their degree with a range of experience, knowledge and needs, making it difficult to provide writing assistance in a structured manner. Using a less structured approach to provide writing assistance has been trialled with promising results (Boud, Cohen, & Sampson, 2001; Stracke, 2010; Devenish et al., 2009). Although semi-structured approaches have been developed and examined, informal learning opportunities have received minimal attention. The primary difference between Shut Up and Write! and other writing practices is that individuals do not engage in any structured activity and they do not share the outcomes of the writing. The purpose of Shut Up and Write! is to transform writing practice from a solitary experience to a social one. Shut Up and Write! typically takes place outside of formal learning environments, in public spaces such as a café. The structure of Shut Up and Write! sessions is simple: participants meet at a specific time and place, chat for a few minutes, then they Shut Up and Write for a predetermined amount of time. Critical to the success of the sessions is that there is no critiquing of the writing, and there are no competitions or formal exercises. Our case study examines the experience of two meetup groups at RMIT University and QUT through narrative accounts from participants. These accounts reveal that participants have learned: • Writing/productivity techniques; • Social/cloud software; • Aspects of the PhD; and • ‘Mundane’ dimensions of academic practice. In addition to this, activities such as Shut Up and Write! promote peer-to-peer bonding, knowledge exchange, and informal learning within the higher degree research experience. This case study extends the initial work presented by the authors in collaboration with Dr. Inger Mewburn at QPR2012 – Quality in Postgraduate Research Conference, 2012.
Abstract:
In the face of Australia’s disaster-prone environment, architects Ian Weir and James Davidson are reconceptualising how our residential buildings might become more resilient to fire, flood and cyclone. With their first-hand experience of natural disasters, James, director of Emergency Architects Australia (EAA), and Ian, one of Australia’s few ‘bushfire architects’, discuss the ways we can design with disaster in mind. Dr Ian Weir is one of Australia’s few ‘bushfire architects’, exploring a holistic ‘ground up’ approach to bushfire in which landscape, building design and habitation patterns are orchestrated to respond to site-specific fire characteristics. Ian’s research is developed through design studio teaching at QUT and through built works in Western Australia’s fire-prone forests and heathlands.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed from the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models do not fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model to produce more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and yet more imperative, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators could be nought in EHM, condition indicators are always present, because these indicators are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industry applications, due to sparse failure event data of assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories made by the semi-parametric EHM, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing the estimated results of these models with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
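Read literally, the description above places the condition indicators inside the baseline hazard and the operating environment indicators in the covariate function. A schematic contrast with the standard PHM, using illustrative notation (z(t) for condition indicators, w(t) for operating environment indicators) and assuming a PHM-style multiplicative exponential link for the covariate term:

```latex
% Schematic only: the exponential link and the symbols are illustrative
% assumptions, not the exact functional forms developed in the thesis.
\begin{align*}
  \text{PHM:}\quad h(t \mid \mathbf{x}) &= h_0(t)\, \exp\!\big(\boldsymbol{\beta}^{\top}\mathbf{x}\big) \\
  \text{EHM (as described):}\quad h\big(t \mid \mathbf{z}(t), \mathbf{w}(t)\big) &= h_0\big(t, \mathbf{z}(t)\big)\, \exp\!\big(\boldsymbol{\gamma}^{\top}\mathbf{w}(t)\big)
\end{align*}
```

Under this reading, setting the operating environment effects to zero leaves the condition-updated baseline hazard intact, which matches the remark above that those effects could be nought while condition indicators remain observable.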
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to no longer have the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss rather than indirect techniques such as muscle strength assessment, which generally is unable to detect progressions due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainties. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
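The likelihood-free idea described in Part I above (repeatedly simulating data from the model and keeping parameter draws whose simulated summary statistics are close to the observed ones) can be illustrated with a basic ABC rejection sampler. This is a sketch of the general ABC idea only; the toy model (a normal mean), summary statistic and tolerance are assumptions, not the SMC-based algorithms developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data and its summary statistic (here simply the sample mean).
observed = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = observed.mean()

def prior_sample():
    """Draw a candidate parameter value from a vague uniform prior."""
    return rng.uniform(-10.0, 10.0)

def simulate(theta, n=50):
    """Simulate a data set from the model for a given parameter value."""
    return rng.normal(loc=theta, scale=1.0, size=n)

accepted = []
epsilon = 0.1  # tolerance on the summary-statistic distance
while len(accepted) < 200:
    theta = prior_sample()
    s_sim = simulate(theta).mean()
    # Accept the draw only if simulated and observed summaries are close;
    # the accepted draws approximate the posterior without evaluating a likelihood.
    if abs(s_sim - s_obs) < epsilon:
        accepted.append(theta)

print(f"ABC posterior mean ~ {np.mean(accepted):.2f}")
```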
Abstract:
ZnO is a wide band-gap semiconductor that has several desirable properties for optoelectronic devices. With its large exciton binding energy of ~60 meV, ZnO is a promising candidate for high-stability, room-temperature luminescent and lasing devices [1]. Ultraviolet light-emitting diodes (LEDs) based on ZnO homojunctions have been reported [2,3], while preparing stable p-type ZnO is still a challenge. An alternative way is to use other p-type semiconductors, either inorganic or organic, to form heterojunctions with the naturally n-type ZnO. The crystal structure of wurtzite ZnO can be described as Zn and O atomic layers alternately stacked along the [0001] direction. Because the growth rate is fastest over the polar (0001) facet, ZnO crystals tend to grow into one-dimensional structures, such as nanowires and nanobelts. Since the first report of ZnO nanobelts in 2001 [4], ZnO nanostructures have been particularly studied for their potential applications in nano-sized devices. Various growth methods have been developed for growing ZnO nanostructures, such as chemical vapor deposition (CVD), metal-organic CVD (MOCVD), aqueous growth and electrodeposition [5]. Based on the successful synthesis of ZnO nanowires/nanorods, various types of hybrid light-emitting diodes (LEDs) were made. Inorganic p-type semiconductors, such as GaN, Si and SiC, have been used as substrates to grow ZnO nanorods/nanowires for making LEDs. GaN is an ideal material that matches ZnO not only in the crystal structure but also in the energy band levels. However, preparing Mg-doped p-GaN films via epitaxial growth is still costly. In comparison, organic semiconductors are inexpensive and offer many options, as a large variety of p-type polymer and small-molecule semiconductors are now commercially available. Organic semiconductors, however, are limited in durability and environmental stability. Many polymer semiconductors are susceptible to damage by humidity or mere exposure to oxygen in the air. The carrier mobilities of polymer semiconductors are also generally lower than those of inorganic semiconductors. However, the combination of polymer semiconductors and ZnO nanostructures opens the way for making flexible LEDs. There are few reports on hybrid LEDs based on ZnO/polymer heterojunctions; some of them showed the characteristic UV electroluminescence (EL) of ZnO. This chapter reports recent progress in hybrid LEDs based on ZnO nanowires and other inorganic/organic semiconductors. We provide an overview of ZnO-nanowire-based hybrid LEDs from the perspectives of the device configuration, the growth methods of ZnO nanowires and the selection of p-type semiconductors. Device performance and remaining issues are also presented.
Abstract:
All elections are unique, but the Australian federal election of 2010 was unusual for many reasons. It came in the wake of the unprecedented ousting of the Prime Minister who had led the Australian Labor Party to a landslide victory, after eleven years in opposition, at the previous election in 2007. In a move that to many would have been unthinkable, Kevin Rudd’s increasing unpopularity within his own parliamentary party finally took its toll and in late June he was replaced by his deputy, Julia Gillard. Thus the second unusual feature of the election was that it was contested by Australia’s first female prime minister. The third unusual feature was that the election almost saw a first-term government, with a comfortable majority, defeated. Instead it resulted in a hung parliament, for the first time since 1940, and Labor scraped back into power as a minority government, supported by three independents and the first member of the Australian Greens ever to be elected to the House of Representatives. The Coalition Liberal and National opposition parties themselves had a leader of only eight months’ standing, Tony Abbott, whose ascension to the position had surprised more than a few. This was the context for an investigation of voting behaviour in the 2010 election...