847 results for Classification Methods
Abstract:
A recent production of Nicholson’s Shadowlands at the Brisbane Powerhouse could have included two advertising lines: “Outspoken American-Jewish poet meets conservative British Oxford scholar” and “Emotive American Method-trained actor meets contained British-trained actor.” While the fusion of acting methodologies in intercultural acting has been discussed at length, little discussion has focussed on the juxtaposition of diverse acting styles in mainstream theatre production. This paper explores how the combination of American Method acting and more traditional British conservatory acting in Crossbow’s August 2010 production of Shadowlands worked to add extra layers of meaning to the performance text. This sometimes inimical relationship between the two acting styles had its beginnings in the rehearsal room and continued onstage. Audience reception of the play in post-performance discussions revealed the audience’s acute awareness of the transatlantic cultural tensions on stage. On one occasion, this resulted in a heated debate on cultural expression, continuing well after the event, during which audience members became co-performers in the cultural discourses of the play.
Abstract:
This article outlines the key recommendations of the Australian Law Reform Commission’s review of the National Classification Scheme, as set out in its report Classification – Content Regulation and Convergent Media (ALRC, 2012). It identifies key contextual factors that underpin the need for reform of media classification laws and policies, including the fragmentation of regulatory responsibilities and the convergence of media platforms, content and services, as well as discussing the ALRC’s approach to law reform.
Abstract:
The purpose of this paper is to report on a methods research project investigating the evaluation of diverse teaching practice in higher education. The research method is a single-site case study of an Australian university, with data collected through published documents, surveys, interviews and focus groups. The project provides evidence of the wide variety of evaluation practice and diverse teaching practice across the university. This breadth identifies the need for greater flexibility in evaluation processes, tools and support to assist teaching staff to evaluate their diverse teaching practice. Employment opportunities for academics benchmark the university nationally and position the case study in the field. Finally, the findings reaffirm the institution’s ongoing responsibility to provide services that support teaching staff.
Abstract:
Every day, inboxes are flooded with invitations to invest money in overseas schemes, notifications of overseas lottery wins and inheritances, and emails from banks and other institutions asking customers to confirm their identity and account details. While these requests may seem outrageous, many recipients believe them to be genuine and respond by sending money or personal details. This can have devastating financial, emotional and physical consequences. While enforcement action is important, greater success is likely to come in the area of prevention, which avoids victim losses in the first place. Considerable support is also required by victims who have suffered significant losses as they try to get their lives back on track. This project examined fraud prevention strategies and support services for victims of online fraud across the United Kingdom, the United States of America and Canada. While much work has already been undertaken in Queensland, there is considerable room for improvement, and a great deal can be learnt from these overseas jurisdictions. Several examples of innovative and effective responses, particularly in the area of victim support, are highlighted throughout this report. It is argued that Australia can continue to improve its prevention of online fraud and its support of victims by applying the knowledge and expertise learnt overseas to the local context.
Abstract:
Purpose: The prevalence of refractive errors in children has been extensively researched. Comparisons between studies can, however, be compromised because of differences between accommodation control methods and techniques used for measuring refractive error. The aim of this study was to compare spherical refractive error results obtained at baseline and using two different accommodation control methods – extended optical fogging and cycloplegia, for two measurement techniques – autorefraction and retinoscopy. Methods: Participants comprised twenty-five school children aged between 6 and 13 years (mean age: 9.52 ± 2.06 years). The refractive error of one eye was measured at baseline and again under two different accommodation control conditions: extended optical fogging (+2.00DS for 20 minutes) and cycloplegia (1% cyclopentolate). Autorefraction and retinoscopy were both used to measure most plus spherical power for each condition. Results: A significant interaction was demonstrated between measurement technique and accommodation control method (p = 0.036), with significant differences in spherical power evident between accommodation control methods for each of the measurement techniques (p < 0.005). For retinoscopy, refractive errors were significantly more positive for cycloplegia compared to optical fogging, which were in turn significantly more positive than baseline, while for autorefraction, there were significant differences between cycloplegia and extended optical fogging and between cycloplegia and baseline only. Conclusions: Determination of refractive error under cycloplegia elicits more plus than using extended optical fogging as a method to relax accommodation. These findings support the use of cycloplegic refraction compared with extended optical fogging as a means of controlling accommodation for population based refractive error studies in children.
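As an illustrative sketch only (not the study’s analysis; the column names and generated data are hypothetical), a method x technique interaction of this kind can be tested with a two-way repeated-measures ANOVA, for example via statsmodels:

```python
# Illustrative two-way repeated-measures ANOVA: accommodation control method
# (baseline / fogging / cycloplegia) x measurement technique (autorefraction /
# retinoscopy). Data are hypothetical, in long format: one row per
# child x method x technique cell.
import pandas as pd
import numpy as np
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
rows = []
for child in range(25):
    for method in ["baseline", "fogging", "cycloplegia"]:
        for tech in ["autorefraction", "retinoscopy"]:
            shift = {"baseline": 0.0, "fogging": 0.3, "cycloplegia": 0.5}[method]
            rows.append({"child": child, "method": method, "technique": tech,
                         "sphere": rng.normal(0.5 + shift, 0.4)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="sphere", subject="child",
              within=["method", "technique"]).fit()
print(res.anova_table)  # inspect the method x technique interaction row
```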
Abstract:
Background: To derive preference-based measures from various condition-specific descriptive health-related quality of life (HRQOL) measures, a general two-stage method has evolved: 1) an item from each domain of the HRQOL measure is selected to form a health state classification system (HSCS); 2) a sample of health states is valued and an algorithm derived for estimating the utility of all possible health states. The aim of this analysis was to determine whether confirmatory or exploratory factor analysis (CFA, EFA) should be used to derive a cancer-specific utility measure from the EORTC QLQ-C30. Methods: Data were collected with the QLQ-C30v3 from 356 patients receiving palliative radiotherapy for recurrent or metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter based on a conceptual model (the established domain structure of the QLQ-C30: physical, role, emotional, social and cognitive functioning, plus several symptoms) and clinical considerations (the views of both patients and clinicians about issues relevant to HRQOL in cancer). The dimensions determined by each method were then subjected to item response theory analysis, including Rasch analysis. Results: CFA results generally supported the proposed conceptual model, with residual correlations requiring only minor adjustments (namely, the introduction of two cross-loadings) to improve model fit (incremental χ²(2) = 77.78, p < .001). Although EFA revealed a structure similar to the CFA, some items had loadings that were difficult to interpret. Further assessment of dimensionality with Rasch analysis aligned the EFA dimensions more closely with the CFA dimensions. Three items exhibited floor effects (>75% of observations at the lowest score), six exhibited misfit to the Rasch model (fit residual > 2.5), none exhibited disordered item response thresholds, and four exhibited differential item functioning (DIF) by gender or cancer site. Upon inspection of the remaining items, three were considered relatively less clinically important than the remaining nine. Conclusions: CFA appears more appropriate than EFA, given the well-established structure of the QLQ-C30 and its clinical relevance. Further, the confirmatory approach produced more interpretable results than the exploratory approach. Other aspects of the general method remain largely the same. The revised method will be applied to a large number of data sets as part of the international and interdisciplinary project to develop a multi-attribute utility instrument for cancer (MAUCa).
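For readers unfamiliar with the exploratory step, the following is a minimal sketch (not the paper’s analysis) of an EFA on a hypothetical item-response matrix, assuming the third-party factor_analyzer package:

```python
# Illustrative exploratory factor analysis on a hypothetical 30-item,
# QLQ-C30-like response matrix. Real analyses would use patient data and
# assess the number of factors first (e.g. via eigenvalues or parallel analysis).
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(5)
X = rng.integers(1, 5, size=(356, 30)).astype(float)  # hypothetical responses

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")  # oblique rotation, since
fa.fit(X)                                             # HRQOL factors may correlate
loadings = fa.loadings_                               # 30 x 5 loading matrix
print(np.round(loadings[:5], 2))  # inspect which items load on which factor
```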
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics, and when used in conjunction with comparative genomics they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Of chief interest was the relationship observed between promoter strength and TFs grouped by their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggests support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
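A minimal sketch of the kind of one-sided two-sample t-test described above (illustrative only; the promoter-strength arrays are hypothetical placeholders, not the thesis data):

```python
# Illustrative sketch: a one-sided Welch t-test comparing promoter strength
# between activator-associated and repressor-associated promoters.
# The values below are hypothetical, not results from the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
activator_strength = rng.normal(0.45, 0.15, 30)  # promoters with activator TFBSs
repressor_strength = rng.normal(0.55, 0.15, 30)  # promoters with repressor TFBSs

# H1: activator-associated promoters are weaker (alternative='less').
t, p = stats.ttest_ind(activator_strength, repressor_strength,
                       equal_var=False, alternative="less")
print(f"t = {t:.3f}, one-sided p = {p:.3f} (compare with alpha = 0.1)")
```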
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were absent from both E. coli and B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
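As a hedged illustration of the spectrum-kernel SVM approach the thesis describes (a sketch over toy data, not the thesis implementation; sequences, labels and k are made up):

```python
# Minimal spectrum-kernel sketch for DNA sequences (illustrative only).
# The spectrum kernel K(x, y) is the inner product of k-mer count vectors;
# it is used here with scikit-learn's SVC via a precomputed Gram matrix.
from collections import Counter
from itertools import product
import numpy as np
from sklearn.svm import SVC

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def spectrum_vector(seq, k=K):
    """Count vector over all 4^k possible k-mers in the sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[kmer] for kmer in KMERS], dtype=float)

seqs = ["ACGTACGTAA", "TTTTGCGCAA", "ACGTTTTTAA", "GCGCGCGCAA"]
y = np.array([1, 0, 1, 0])  # 1 = putative binding site, 0 = background (toy labels)

X = np.vstack([spectrum_vector(s) for s in seqs])
gram = X @ X.T  # spectrum kernel = inner products of k-mer counts

clf = SVC(kernel="precomputed").fit(gram, y)
test = spectrum_vector("ACGTACTTAA")
print(clf.predict((X @ test).reshape(1, -1)))  # kernel between test and training set
```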
Abstract:
Mixed methods research is the use of qualitative and quantitative methods in the same study to gain a more rounded and holistic understanding of the phenomena under investigation. This type of research approach is gaining popularity in the nursing literature as a way to understand the complexity of nursing care and as a means to enhance evidence-based practice. This paper introduces nephrology nurses to mixed methods research, its terminology and its application to nephrology nursing. Five common mixed methods designs are described, highlighting the purposes, strengths and weaknesses of each design. Examples of mixed methods research are given to illustrate its wide application to nursing and its usefulness in nephrology nursing research.
Performance of elite seated discus throwers in F30s classes: part II: does feet positioning matter?
Abstract:
Background: Studies on the relationship between performance and design of the throwing frame have been limited. Part I provided only a description of whole-body positioning. Objectives: The specific objectives were (a) to benchmark feet positioning characteristics (i.e. position, spacing and orientation) and (b) to investigate the relationship between performance and these characteristics for male seated discus throwers in F30s classes. Study Design: Descriptive analysis. Methods: A total of 48 attempts performed by 12 stationary discus throwers in the F33 and F34 classes during the seated discus throwing event of the 2002 International Paralympic Committee Athletics World Championships were analysed in this study. Feet positioning was characterised by three-dimensional data on front and back foot position, together with spacing and orientation, corresponding to the distance between and the angle made by the two feet, respectively. Results: Only 4 of the 30 feet positioning characteristics presented a correlation coefficient greater than 0.5: feet spacing on the mediolateral and anteroposterior axes in the F34 class, and back foot position and feet spacing on the mediolateral axis in the F33 class. Conclusions: This study provided key information for a better understanding of the interaction between the throwing technique of elite seated throwers and their throwing frame.
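A minimal sketch of this kind of correlation screening (illustrative only; the feature names and generated values are hypothetical, not the championship data):

```python
# Illustrative only: flag feet-positioning characteristics whose Pearson
# correlation with throw distance exceeds 0.5. All data are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
distance = rng.normal(20.0, 2.0, 48)  # throw distance (m), one value per attempt
features = {
    "feet_spacing_mediolateral": distance * 0.02 + rng.normal(0, 0.05, 48),
    "back_foot_anteroposterior": rng.normal(0.3, 0.05, 48),
}

for name, values in features.items():
    r, p = pearsonr(values, distance)
    flag = "candidate" if abs(r) > 0.5 else "weak"
    print(f"{name}: r = {r:.2f} (p = {p:.3f}) -> {flag}")
```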
Abstract:
The authors present a qualitative and quantitative comparison of various similarity measures that form the kernel of common area-based stereo-matching systems. They compare classical difference and correlation measures, as well as nonparametric measures based on the rank and census transforms, for a number of outdoor images. For robotic applications, important considerations include robustness to image defects such as intensity variation and noise, the number of false matches, and computational complexity. In the absence of ground truth data, the authors compare the matching techniques based on the percentage of matches that pass the left-right consistency test. They also evaluate the discriminatory power of several match validity measures reported in the literature for eliminating false matches and for estimating match confidence. For guidance applications, it is essential to have an estimate of confidence in the three-dimensional points generated by stereo vision. Finally, a new validity measure, the rank constraint, is introduced that is capable of resolving ambiguous matches for rank transform-based matching.
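As an illustrative sketch of the left-right consistency test mentioned above (a minimal version assuming precomputed disparity maps; not the authors' implementation):

```python
# Illustrative left-right consistency check: a match survives only if the
# disparity computed from the left image agrees, within a tolerance, with
# the disparity computed from the right image at the corresponding pixel.
# Disparity maps are assumed precomputed; the arrays here are toy data.
import numpy as np

def left_right_consistency(disp_left, disp_right, tol=1.0):
    """Return a boolean mask of left-image disparities that pass the LR check."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Where does each left pixel land in the right image?
    x_right = np.clip((xs - disp_left).astype(int), 0, w - 1)
    # Consistent if the right-image disparity maps back within `tol` pixels.
    return np.abs(disp_left - disp_right[ys, x_right]) <= tol

disp_l = np.random.default_rng(2).integers(0, 16, (4, 8)).astype(float)
disp_r = disp_l.copy()  # a toy right-image disparity map
print(left_right_consistency(disp_l, disp_r).mean())  # fraction of matches passing
```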
Mixed methods research approach to the development and review of competency standards for dietitians
Abstract:
Aim: Competency standards support a range of professional activities, including the accreditation of university courses. Reviewing these standards is essential to ensure universities continue to produce well-equipped graduates who can meet the challenge of changing workforce requirements. This paper has two aims: (a) to provide an overview of the methodological approaches utilised for the compilation and review of the Competency Standards for Dietetics, and (b) to evaluate the Dietitians Association of Australia’s Competency Standards and capture emerging and contemporary dietetic practice. Methods: A literature review of the methods used to develop Competency Standards for dietitians in Australia (covering entry-level, advanced-level and DAA Fellow competencies, and other specific areas of competency such as public health nutrition and nutrition education) is outlined and compared with other allied health professions. The mixed methods methodology used in the most recent review is described in more detail. Results: The history of Dietetic Competency Standards development and review in Australia is compared with dietetic Competency Standards internationally and with other health professions in Australia. The political context in which these standards have been developed in Australia, and which has determined their format, is also discussed. The results of the most recent Competency Standards review are reported to highlight emerging practice in Australia. Conclusion: The mixed methods approach used in this review provides rich data about contemporary dietetic practice. Our view supports a planned review of all Competency Standards to ensure practice informs education and credentialling, and we recommend the Dietitians Association of Australia consider this in future.
Abstract:
Nutritional status in people with Parkinson’s disease (PD) has previously been assessed in a number of ways, including BMI, percentage weight loss and the Mini Nutritional Assessment (MNA). The symptoms of the disease, and the side effects of the medication used to manage them, result in a number of nutrition impact symptoms that can negatively influence intake. These include chewing and swallowing difficulties, lack of appetite, nausea, and taste and smell changes, among others. Community-dwelling people with PD, aged >18 years, were recruited (n = 97; 61 M, 36 F). The Patient-Generated Subjective Global Assessment (PG-SGA) and the MNA were used to assess nutritional status. Weight, height, mid-arm circumference (MAC) and calf circumference were measured. Based on the SGA, 16 (16.5%) were moderately malnourished (SGA B), while none were severely malnourished (SGA C). The MNA identified 2 (2.0%) as malnourished and 22 (22.7%) as at risk of malnutrition. Mean MNA scores differed between the three groups, F(2,37) = 7.30, p < .05, but not between the SGA B (21.0 (2.9)) and MNA at-risk (21.8 (1.4)) participants. MAC and calf circumference also differed between the three groups, F(2,37) = 5.51, p < .05 and F(2,37) = 15.33, p < .05, but not between the SGA B (26.2 (4.2), 33.3 (2.8)) and MNA at-risk (28.4 (5.6), 36.4 (4.7)) participants. The MNA results are similar to those of other PD studies using the MNA, where the prevalence of malnutrition was between 0 and 2%, with 20-33% at risk of malnutrition. In this population, the PG-SGA may be more sensitive for assessing malnutrition where nutrition impact symptoms influence intake. With society’s increasing body size, it might also be more appropriate, as it does not rely on MAC and calf circumference measures.
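A minimal sketch of the kind of one-way ANOVA reported above (illustrative only; the group arrays are hypothetical, not the study data):

```python
# Illustrative one-way ANOVA of MNA scores across three nutritional-status
# groups (e.g. well nourished, SGA B, MNA at-risk). Values are hypothetical.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
well_nourished = rng.normal(24.0, 1.5, 20)  # MNA scores per group
sga_b          = rng.normal(21.0, 2.9, 10)
mna_at_risk    = rng.normal(21.8, 1.4, 10)

F, p = f_oneway(well_nourished, sga_b, mna_at_risk)
print(f"F = {F:.2f}, p = {p:.4f}")
```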
Abstract:
With the increasing number of stratospheric particles available for study (via the U2 and/or WB57F collections), it is essential that a simple, yet rational, classification scheme be developed for general use. Such a scheme should be applicable to all particles collected from the stratosphere, rather than limited to only extraterrestrial or chemical sub-groups. Criteria for the efficacy of such a scheme would include: (a) objectivity, (b) ease of use, (c) acceptance within the broader scientific community and (d) how well the classification provides intrinsic categories which are consistent with our knowledge of particle types present in the stratosphere.
Abstract:
Several investigators have recently proposed classification schemes for stratospheric dust particles [1-3]. In addition, extraterrestrial materials within stratospheric dust collections may be used as a measure of micrometeorite flux [4]. However, little attention has been given to the problems of the stratospheric collection as a whole. Some of these problems include: (a) determination of accurate particle abundances at a given point in time; (b) the extent of bias in the particle selection process; (c) the variation of particle shape and chemistry with size; (d) the efficacy of proposed classification schemes and (e) an accurate determination of physical parameters associated with the particle collection process (e.g. minimum particle size collected, collection efficiency, variation of particle density with time). We present here preliminary results from SEM, EDS and, where appropriate, XRD analysis of all of the particles from a collection surface which sampled the stratosphere between 18 and 20 km in altitude. Determinations of particle densities from this study may then be used to refine models of the behavior of particles in the stratosphere [5].