Abstract:
Introduction Clinical guidelines for the treatment of chronic low back pain suggest the use of supervised exercise. Motor control (MC) based exercise is widely used within clinical practice but its efficacy is equivalent to general exercise therapy. MC exercise targets the trunk musculature. Considering the mechanical links between the hip, pelvis, and lumbar spine, surprisingly little focus has been placed on investigating the contribution of the hip musculature to lumbopelvic support. The purpose of this study is to compare the efficacy of two exercise programs for the treatment of non-specific low back pain (NSLBP). Methods Eighty individuals aged 18-65 years were randomized into two groups to participate in this trial. The primary outcome measures included self-reported pain intensity (0-100 mm VAS) and percent disability (Oswestry Disability Index V2). Bilateral measures of hip strength (N/kg) and two-dimensional frontal plane mechanics (º) were the secondary outcomes. Outcomes were measured at baseline and following a six-week home-based exercise program including weekly sessions of real-time ultrasound imaging. Results Within-group comparisons revealed clinically meaningful reductions in pain for both groups: MC exercise only (N = 40, mean change = -20.9 mm, 95% CI -25.7 to -16.1) and combined MC and hip exercise (N = 40, mean change = -24.9 mm, 95% CI -30.8 to -19.0). There was no statistical difference between groups in the change in pain (mean difference = -4.0 mm, t = -1.07, p = 0.29, 95% CI -11.5 to 3.5) or disability (mean difference = -0.3%, t = -0.19, p = 0.85, 95% CI -11.5 to 3.5). Conclusion Both exercise programs had similar and positive effects on NSLBP, which supports the use of home-based exercise programs with weekly supervised visits. However, the addition of specific hip strengthening exercises to a MC based exercise program did not result in significantly greater reductions in pain or disability. Trial Registration NCT01567566 Funding: Worker’s Compensation Board Alberta Research Grant.
Abstract:
A travel article about a tour of Samoa. I WAKE to the smell of smoke. Outside, a morning routine has begun - the burning of leaves that overnight settle in the backyards. Even in the capital, gardens are miniature farms. Chickens, bananas and coconuts. When the smoke haze lifts and the faint waves of a sea breeze come off Apia Harbour, the corrugated iron roofs shine through like headlights coming out of fog...
Abstract:
As urbanisation of the global population has risen above 50%, growing food in urban spaces has increased in importance: it can contribute to food security, reduce food miles, and improve people’s physical and mental health. The task of growing food in urban environments is approached by a mixture of residential growers and community groups. Permablitz Brisbane is an event-centric grassroots community that organises daylong ‘working bee’ events, drawing on permaculture design principles in the planning and design process. Permablitz Brisbane provides a useful contrast to other location-centric forms of urban agriculture communities (such as city farms or community gardens), as their aim is to help encourage urban residents to grow their own food. We present findings and design implications from a qualitative study with members of this group, using ethnographic methods to engage with and understand how this group operates. Our findings describe four themes that include opportunities, difficulties, and considerations for the creation of interventions by Human-Computer Interaction (HCI) designers.
Abstract:
Determination of sequence similarity is a central issue in computational biology, a problem addressed primarily through BLAST, an alignment-based heuristic which has underpinned much of the analysis and annotation of the genomic era. Despite their success, alignment-based approaches scale poorly with increasing data set size, and are not robust under structural sequence rearrangements. Successive waves of innovation in sequencing technologies – so-called Next Generation Sequencing (NGS) approaches – have led to an explosion in data availability, challenging existing methods and motivating novel approaches to sequence representation and similarity scoring, including adaptation of existing methods from other domains such as information retrieval. In this work, we investigate locality-sensitive hashing of sequences through binary document signatures, applying the method to a bacterial protein classification task. Here, the goal is to predict the gene family to which a given query protein belongs. Experiments carried out on a pair of small but biologically realistic datasets (the full protein repertoires of families of Chlamydia and Staphylococcus aureus genomes respectively) show that a measure of similarity obtained by locality-sensitive hashing gives highly accurate results, while offering a number of avenues for substantial performance improvements over BLAST.
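As a rough illustration of the general idea (not the authors' implementation), the sketch below builds binary document-style signatures for protein sequences by hashing k-mer counts and taking the signs of random projections, then scores similarity as the fraction of matching bits; the k-mer length, signature size, feature dimension and family names are all illustrative assumptions.

```python
# Minimal sketch: locality-sensitive hashing of protein sequences via binary
# signatures from signed random projections over hashed k-mer counts.
import hashlib
import numpy as np

K = 3            # k-mer length (assumption)
N_BITS = 256     # signature length in bits (assumption)
DIM = 4096       # hashed k-mer feature space (assumption)

rng = np.random.default_rng(0)
planes = rng.standard_normal((N_BITS, DIM))   # random hyperplanes, fixed for all sequences

def kmer_vector(seq: str) -> np.ndarray:
    """Hash k-mer counts of a protein sequence into a fixed-length vector."""
    v = np.zeros(DIM)
    for i in range(len(seq) - K + 1):
        idx = int(hashlib.md5(seq[i:i + K].encode()).hexdigest(), 16) % DIM
        v[idx] += 1.0
    return v

def signature(seq: str) -> np.ndarray:
    """Binary LSH signature: sign of the projection onto each random hyperplane."""
    return (planes @ kmer_vector(seq)) >= 0

def similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Fraction of matching bits; approximates angular similarity of k-mer profiles."""
    return float(np.mean(sig_a == sig_b))

# Toy usage: score a query protein against hypothetical family representatives.
references = {"familyA": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
              "familyB": "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS"}
query = "MKTAYIAKQRQISFVKSHFSRQIEERLGLIEVQ"
ref_sigs = {fam: signature(s) for fam, s in references.items()}
q_sig = signature(query)
best = max(ref_sigs, key=lambda fam: similarity(q_sig, ref_sigs[fam]))
print(best)  # expected: familyA, since the query differs from it by one residue
```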
Abstract:
What is ‘best practice’ when it comes to managing intellectual property rights in participatory media content? As commercial media and entertainment business models have increasingly come to rely upon the networked productivity of end-users (Banks and Humphreys 2008), this question has been framed as a problem of creative labour made all the more precarious by the changing employment patterns and work cultures of knowledge-intensive societies and globalising economies (Banks, Gill and Taylor 2014). This paper considers how the problems of ownership are addressed in non-commercial, community-based arts and media contexts. Problems of labour are also manifest in these contexts (for example, reliance on volunteer labour and uncertain economic reward for creative excellence). Nonetheless, managing intellectual property rights in collaborative creative works that are created in community media and arts contexts is no less challenging or complex than in commercial contexts. This paper takes as its focus a particular participatory media practice known as ‘digital storytelling’. The digital storytelling method, formalised by the Centre for Digital Storytelling (CDS) from the mid-1990s, has been internationally adopted and adapted for use in an open-ended variety of community arts, education, health and allied services settings (Hartley and McWilliam 2009; Lambert 2013; Lundby 2008; Thumin 2012). It provides a useful point of departure for thinking about a range of collaborative media production practices that seek to address participation ‘gaps’ (Jenkins 2006). However, the outputs of these activities, including digital stories, cannot be fully understood or accurately described as user-generated content. For this reason, digital storytelling is taken here to belong to a category of participatory media activity that has been described as ‘co-creative’ media (Spurgeon 2013), in order to improve understanding of the conditions of mediated and mediatized participation (Couldry 2008). This paper reports on a survey of the actual copyright practices of cultural institutions and community-based media arts practitioners that work with digital storytelling and similar participatory content creation methods. This survey finds that although there is a preference for Creative Commons licensing, a great variety of approaches are taken to managing intellectual property rights in co-creative media. These range from the use of Creative Commons licences (for example, Lambert 2013, p.193), to retention of full copyright by storytellers, to retention of certain rights by facilitating organisations (for example, broadcast rights by community radio stations and public service broadcasters), and a range of other shared rights arrangements between professional creative practitioners, the individual storytellers and communities with which they collaborate, media outlets, exhibitors and funders. This paper also considers how aesthetic and ethical considerations shape responses to questions of intellectual property rights in community media arts contexts. For example, embedded in the CDS digital storytelling method is ‘a critique of power and the numerous ways that rank is unconsciously expressed in engagements between classes, races and gender’ (Lambert 117).
The CDS method privileges the interests of the storyteller and, through a transformative workshop process, aims to generate original individual stories that, in turn, reflect self-awareness of ‘how much the way we live is scripted by history, by social and cultural norms, by our own unique journey through a contradictory, and at times hostile, world’ (Lambert 118). Such a critical approach is characteristic of co-creative media practices. It extends to a heightened awareness of the risks of ‘story theft’ and the challenges of ownership and informs ideas of ‘best practice’ amongst creative practitioners, teaching artists and community media producers, along with commitments to achieving equitable solutions for all participants in co-creative media practice (for example, Lyons-Reid and Kuddell nd.). Yet, there is surprisingly little written about the challenges of managing intellectual property produced in co-creative media activities. A dialogic sense of ownership in stories has been identified as an indicator of successful digital storytelling practice (Hayes and Matusov 2005) and is helpful in grounding the more abstract claims of empowerment for social participation that are associated with co-creative methods. Contrary to the ‘change from below’ philosophy that underpins much thinking about co-creative media, however, discussions of intellectual property usually focus on how methods such as digital storytelling contribute to the formation of copyright law-compliant subjects, particularly when used in educational settings (for example, Ohler nd.). This also exposes the reliance of co-creative methods on the creative assets of storytellers (rather than on the copyrighted materials of the media cultures of storytellers) as a pragmatic response to the constraints that intellectual property rights laws impose on the entire category of participatory media. At the level of practical politics, it also becomes apparent that co-creative media practitioners and storytellers located in copyright jurisdictions governed by ‘fair use’ principles have much greater creative flexibility than those located in jurisdictions governed by ‘fair dealing’ principles.
Abstract:
Background Chronic kidney disease is a global public health problem of increasing prevalence. There are five stages of kidney disease, with Stage 5 indicating end stage kidney disease (ESKD), at which point dialysis is required or death will eventually occur. Over the last two decades there have been increasing numbers of people commencing dialysis. A majority of this increase has occurred in the population of people who are 65 years and over. In the older population it is at times difficult to determine whether dialysis will provide any benefit over non-dialysis management. The poor prognosis for the population over 65 years raises issues around management of ESKD in this population. It is therefore important to review any research that has been undertaken in this area which compares outcomes of the older ESKD population who have commenced dialysis with those who have received non-dialysis management. Objective The primary objective was to assess the effect of dialysis compared with non-dialysis management for the population of 65 years and over with ESKD. Inclusion criteria Types of participants This review considered studies that included participants who were 65 years and older. These participants needed to have been diagnosed with ESKD for greater than three months and also be either receiving renal replacement therapy (RRT) (hemodialysis [HD] or peritoneal dialysis [PD]) or non-dialysis management. The settings for the studies included the home, self-care centre, satellite centre, hospital, hospice or nursing home. Types of intervention(s)/phenomena of interest This review considered studies where the intervention was RRT (HD or PD) for the participants with ESKD. There was no restriction on frequency of RRT or length of time the participant received RRT. The comparator was participants who were not undergoing RRT. Types of studies This review considered both experimental and epidemiological study designs including randomized controlled trials, non-randomized controlled trials, quasi-experimental, before and after studies, prospective and retrospective cohort studies, case control studies and analytical cross sectional studies. This review also considered descriptive epidemiological study designs including case series, individual case reports and descriptive cross sectional studies for inclusion. This review included any of the following primary and secondary outcome measures:
• Primary outcome: survival measures
• Secondary outcomes: functional performance score (e.g. Karnofsky Performance score), symptoms and severity of end stage kidney disease, hospital admissions, health related quality of life (e.g. KDQOL, SF-36 and HRQOL), and comorbidities (e.g. Charlson Comorbidity Index).
Abstract:
* Local foods are growing in importance in the mindset of the consumer – “the new organic” (McKenzie-Minifie, 2007)
* Consumers are becoming more active in choosing alternative channels to purchase locally grown/produced foods: growth of farmers’ markets, roadside stalls, community gardens and CSA programs
* Supermarkets and grocers continue to tailor their assortments to include ethnic, organic, natural and local foods to meet changing consumer needs
* Australian research is limited, although one early study has found ‘buying locally produced foods’ was considered an important attribute (Lea & Worsley, 2007)
* International research has tended to focus on COO effects, rather than region or local effects (Insch & Florek, 2009)
* Emerging research is beginning to explore consumer interest in ‘local’ over simply ‘domestic’ – although not specifically in food (Hustvedt, Carroll & Bernard, 2013)
* One study has examined differences in attitudes, subjective norms and intentions toward the purchase of locally produced foods (Campbell, 2013)
Abstract:
A new online method is presented for estimating the angular random walk and rate random walk coefficients of IMU (inertial measurement unit) gyros and accelerometers. The method is based on a state space model and provides parameter estimators for quantities previously obtained from off-line data techniques such as the Allan variance graph, which demands large off-line computational effort and data storage. The technique proposed here requires no data storage and a computational effort of O(100) calculations per data sample.
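For contrast with the proposed online estimator, the following sketch shows the conventional off-line Allan variance calculation that the abstract refers to; it is a generic textbook computation, not the paper's method, and the sampling rate, cluster sizes and noise levels are illustrative assumptions.

```python
# Minimal sketch of the conventional OFF-LINE Allan variance: it needs the whole
# recorded gyro signal in memory, unlike an online estimator.
import numpy as np

def allan_variance(rate: np.ndarray, fs: float, m_list):
    """Non-overlapping Allan variance of a gyro rate signal sampled at fs (Hz).

    For each cluster size m, average the signal over clusters of duration tau = m/fs
    and take 0.5 * mean of squared differences of successive cluster averages.
    """
    taus, avars = [], []
    for m in m_list:
        n_clusters = len(rate) // m
        if n_clusters < 2:
            break
        clusters = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(clusters) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

# Toy usage: white noise (angle random walk) plus a rate random walk drift term.
fs = 100.0                                        # sample rate, Hz (assumption)
n = 200_000
rng = np.random.default_rng(1)
arw = 0.01 * rng.standard_normal(n)               # angle random walk component
rrw = np.cumsum(1e-5 * rng.standard_normal(n))    # rate random walk component
taus, avars = allan_variance(arw + rrw, fs, m_list=[2 ** k for k in range(1, 15)])
# On a log-log plot, the slope -1/2 region at short tau gives the ARW coefficient
# and the slope +1/2 region at long tau gives the RRW coefficient.
```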
Abstract:
In this paper conditional hidden Markov model (HMM) filters and conditional Kalman filters (KF) are coupled together to improve demodulation of differentially encoded signals in noisy fading channels. We present an indicator matrix representation for differentially encoded signals and the optimal HMM filter for demodulation. The filter requires O(N^3) calculations per time iteration, where N is the number of message symbols. Decision feedback equalisation is investigated via coupling the optimal HMM filter for estimating the message, conditioned on estimates of the channel parameters, and a KF for estimating the channel states, conditioned on soft information message estimates. The particular differential encoding scheme examined in this paper is differential phase shift keying. However, the techniques developed can be extended to other forms of differential modulation. The channel model we use allows for multiplicative channel distortions and additive white Gaussian noise. Simulation studies are also presented.
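The following is a minimal, hedged sketch of a single forward HMM filter update for differentially encoded symbols conditioned on a channel-gain estimate, to illustrate the kind of recursion involved; it is not the paper's filter, and the constellation size, transition model and noise figures are assumptions.

```python
# Minimal sketch (assumptions, not the paper's filter): a forward HMM filter step
# that updates the posterior over DPSK message symbols given a noisy observation
# and a current channel-gain estimate. The coupling to a Kalman channel estimator
# is only indicated by the `channel_gain` argument.
import numpy as np

N = 4                                               # number of message symbols (DQPSK)
phases = np.exp(1j * 2 * np.pi * np.arange(N) / N)  # differential phase increments
A = np.full((N, N), 1.0 / N)                        # symbol transitions (assumed i.i.d.)

def hmm_filter_step(alpha, y, prev_phase, channel_gain, noise_var):
    """One forward-filter update: predict with A, correct with symbol likelihoods."""
    means = channel_gain * prev_phase * phases        # predicted observation per symbol
    lik = np.exp(-np.abs(y - means) ** 2 / noise_var)  # Gaussian likelihoods
    alpha = lik * (A.T @ alpha)                       # predict + correct
    return alpha / alpha.sum()                        # normalise to a probability vector

# Toy usage: track one noisy observation.
alpha = np.full(N, 1.0 / N)               # uniform prior over symbols
prev_phase = 1.0 + 0.0j                   # previously decided carrier phase
channel_gain = 0.9 * np.exp(1j * 0.2)     # estimate supplied by the Kalman filter
y = channel_gain * prev_phase * phases[1] + 0.05 * (np.random.randn() + 1j * np.random.randn())
alpha = hmm_filter_step(alpha, y, prev_phase, channel_gain, noise_var=0.01)
print(alpha.argmax())                      # most likely transmitted symbol (expected: 1)
```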
Abstract:
This research has developed a novel synthetic structural health monitoring system model that is cost-effective and flexible in sensing and data acquisition, and robust in structural safety evaluation, for the purpose of long-term and frequent monitoring of large-scale civil infrastructure during its service life. Not only did it establish a real-world structural monitoring test-bed at the heart of QUT Gardens Point Campus, but it can also facilitate reliable and prompt protection for any built infrastructure system as well as the user community involved.
Abstract:
This paper discusses Compulsory Income Management (CIM) in Australia and the implications of technology-backed forms of surveillance and increasingly conditional benefit payments. The CIM project raises important questions about requiring people to take greater responsibility for their personal behaviour when they no longer have control over key financial aspects of their lives. Some Indigenous communities have resisted the BasicsCard because CIM was imposed with little prior consultation or subsequent independent evaluation. The compulsory income management of individuals by a paternalist welfare state contradicts and undermines the purported policy aims that recipients become less welfare dependent and more positively engaged with the world of paid employment, and it does little to address growing poverty in Australia.
Abstract:
This paper discusses proposed changes to the Australian welfare state in the Welfare Review chaired by Patrick McClure and launched by Kevin Andrews, Minister for Social Services in the Abbott government, in a recent address to the Sydney Institute. Andrews cited the Beveridge Report of 1942, referring to Lord William Beveridge as the “godfather of the British post-war welfare state”, commending him for putting forward a plan for a welfare state providing a minimal level of support, constituting a bare safety net, rather than “stifling civil society and personal responsibility” through generous provision. In line with a key TASA conference theme of challenging institutions and identifying social and political change at local and global levels, this paper examines both the Beveridge Report and the McClure Report, identifying key issues and themes of relevance to current times in Australia.
Abstract:
A tag-based item recommendation method generates an ordered list of items likely to interest a particular user, using the user’s past tagging behaviour. However, tagging behaviour varies across different tagging systems. A key problem in generating quality recommendations is how to build user profiles that interpret user behaviour so that it can be used effectively in recommendation models. Generally, recommendation methods are built to work with specific types of user profiles and may not work well with different datasets. In this paper, we investigate several tagging data interpretation and representation schemes that can lead to building an effective user profile. We discuss the benefits each scheme brings to a recommendation method by highlighting the representative features of user tagging behaviour on a specific dataset. Empirical analysis shows that each interpretation scheme forms a distinct data representation which ultimately affects the recommendation result. Results on various datasets show that an interpretation scheme should be selected based on the dominant usage in the tagging data (i.e. whether a higher number of tags or a higher number of items is present). This usage represents the characteristic of user tagging behaviour in the system. The results also demonstrate how the scheme is able to address the cold-start user problem.
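As a toy illustration of how tagging data can be interpreted as a user profile and used for recommendation (not the schemes evaluated in the paper), the sketch below builds a tag-frequency profile per user and ranks candidate items by cosine similarity to their aggregated tags; all data and names are hypothetical.

```python
# Minimal sketch: interpret (user, item, tag) assignments as a tag-frequency user
# profile and rank candidate items by similarity of the profile to item tags.
from collections import Counter
import math

# Hypothetical tagging data.
assignments = [
    ("u1", "item1", "python"), ("u1", "item1", "tutorial"),
    ("u1", "item2", "python"), ("u2", "item3", "travel"),
    ("u2", "item4", "python"),
]

def tag_profile(user):
    """Tag-based profile: how often the user applied each tag."""
    return Counter(t for u, _, t in assignments if u == user)

def item_tags(item):
    """Tag description of an item, aggregated over all users."""
    return Counter(t for _, i, t in assignments if i == item)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, candidates):
    """Rank unseen items by similarity between the user's tag profile and item tags."""
    profile = tag_profile(user)
    return sorted(candidates, key=lambda i: cosine(profile, item_tags(i)), reverse=True)

print(recommend("u1", ["item3", "item4"]))  # item4 shares the 'python' tag, so it ranks first
```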
Abstract:
As a key element in their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but also to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services within their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, and one other, The Courier Mail (recognising the diversification of types of product in this field, by including it as a representative of Newscorp, now a major participant). The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000 as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study on the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities, among themselves; and finally it compares one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions is conducted as a systematic appraisal of the first level, or principal screens, of the three publications, over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were developed through a preliminary examination of the products over three days in the week before.
That process identified significant elements of media production, such as: variegated sourcing of materials; randomness in the presentation of items; differential production values among media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of top news stories of the day; and the presence of standard news values – once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables. This device, to study the two sets of publications on like standards (essentially production values and news values), has enabled the comparisons to be made. This comparing of the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth, and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on main points of comparison. This takes the form of a field for text, for the insertion of notes, in the table employed for summarising the features of each product, for each day. When the sets of comparisons as outlined above are noted, the process then becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, does provide evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change, and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services. The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production, at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes.
It should also be useful as a reference for further inquiries of the same kind that might be made, and for monitoring the situation of newspaper portals online into the future.
Abstract:
Active learning approaches reduce the annotation cost required by traditional supervised approaches to reach the same effectiveness, by actively selecting informative instances during the learning phase. However, the effectiveness and robustness of the learnt models are influenced by a number of factors. In this paper we investigate the factors that affect the effectiveness, more specifically in terms of stability and robustness, of active learning models built using conditional random fields (CRFs) for information extraction applications. Stability, defined as a small variation in performance when small variations in the training data or in the parameters occur, is a major issue for machine learning models, but even more so in the active learning framework, which aims to minimise the amount of training data required. The factors we investigate are a) the choice of incremental vs. standard active learning, b) the feature set used as a representation of the text (i.e., morphological features, syntactic features, or semantic features) and c) the Gaussian prior variance as one of the important CRF parameters. Our empirical findings show that incremental learning and the Gaussian prior variance lead to more stable and robust models across iterations. Our study also demonstrates that orthographical, morphological and contextual features, as a group of basic features, play an important role in learning effective models across all iterations.
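The sketch below illustrates a generic pool-based active learning loop with least-confidence sampling; a logistic regression classifier stands in for the CRF sequence model and its regularisation parameter stands in for the Gaussian prior variance, so it should be read as an analogy to the setup described, not the paper's experiment.

```python
# Minimal sketch of pool-based active learning with least-confidence sampling.
# A scikit-learn logistic regression stands in for the CRF; its regularisation
# strength C plays the role of the Gaussian prior variance. All settings are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for iteration in range(10):
    # "Standard" active learning: retrain from scratch each round
    # (an incremental variant would instead update the previous model).
    model = LogisticRegression(C=1.0, max_iter=1000)   # larger C = weaker prior
    model.fit(X[labeled], y[labeled])

    # Least-confidence query: the pool instance with the smallest max posterior.
    probs = model.predict_proba(X[pool])
    query = pool[int(np.argmin(probs.max(axis=1)))]

    labeled.append(query)          # the oracle provides y[query]
    pool.remove(query)

print(model.score(X, y))           # accuracy after the active learning loop
```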