211 results for Binary Asteroids
Abstract:
This thesis examines the changing relationships between television, politics, audiences and the public sphere. Premised on the notion that mediated politics is now understood “in new ways by new voices” (Jones, 2005: 4), and appropriating what McNair (2003) calls a “chaos theory” of journalism sociology, this thesis explores how two different contemporary Australian political television programs (Sunrise and The Chaser’s War on Everything) are viewed, understood, and used by audiences. In analysing these programs from textual, industry and audience perspectives, this thesis argues that journalism has been largely thought about in overly simplistic binary terms which have failed to reflect the reality of audiences’ news consumption patterns. The findings of this thesis suggest that both ‘soft’ infotainment (Sunrise) and ‘frivolous’ satire (The Chaser’s War on Everything) are used by audiences in intricate ways as sources of political information, and thus these TV programs (and those like them) should be seen as legitimate and valuable forms of public knowledge production. It therefore might be more worthwhile for scholars to think about, research and teach journalism in the plural: as a series of complementary or antagonistic journalisms, rather than as a single coherent entity.
Abstract:
In this paper, the placement of sectionalizers, as well as a cross-connection, is optimally determined so that an objective function is minimized. The objective function consists of two main parts: the switch cost and the reliability cost. The switch cost comprises the cost of the sectionalizers and the cross-connection, while the reliability cost is assumed to be proportional to a reliability index, SAIDI. To model the allocation of sectionalizers and cross-connection realistically, the cost of each element is treated as discrete. Because binary variables represent the availability of sectionalizers, the search space is highly discrete; the risk of converging to a local minimum is therefore high, and a heuristic optimization method is needed. Discrete Particle Swarm Optimization (DPSO) is employed in this paper to deal with this discrete problem. Finally, a test distribution system is used to validate the proposed method.
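The core of a discrete (binary) PSO is a sigmoid mapping from real-valued velocities to bit-flip probabilities. The sketch below illustrates that mechanism on a toy switch-placement objective; the cost function, inertia/acceleration constants and problem size are all illustrative assumptions, not the paper's actual model.

```python
import math
import random

def binary_pso(cost, n_bits, n_particles=20, iters=50, seed=1):
    """Minimal discrete PSO: velocities stay real-valued and a sigmoid
    turns each velocity into the probability that the bit is set to 1."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pcost = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                # standard velocity update with inertia + cognitive/social pulls
                V[i][d] = (0.7 * V[i][d]
                           + 1.4 * rng.random() * (pbest[i][d] - X[i][d])
                           + 1.4 * rng.random() * (gbest[d] - X[i][d]))
                # sigmoid transfer: velocity -> probability of bit being 1
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            c = cost(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost

# Toy objective (invented): fixed cost per installed sectionalizer plus a
# SAIDI-like penalty that falls as more switches are placed.
def toy_cost(bits):
    n = sum(bits)
    return 5.0 * n + 100.0 / (1 + n)

best, best_cost = binary_pso(toy_cost, n_bits=8)
```

For this toy objective the trade-off bottoms out at three or four installed switches, which the swarm finds easily; a real study would replace `toy_cost` with the discrete switch-plus-SAIDI cost model.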
Abstract:
This thesis examines the new theatrical form of cyberformance (live performance by remote players using internet technologies) and contextualises it within the broader fields of networked performance, digital performance and theatre. Poststructuralist theories that contest the binary distinction between reality and representation provide the analytical foundation for the thesis. A critical reflexive methodological approach is undertaken in order to highlight three themes. First, the essential qualities and criteria of cyberformance are identified, and illustrated with examples from the early 1990s to the present day. Second, two cyberformance groups – the Plaintext Players and Avatar Body Collision – and UpStage, a purpose-built application for cyberformance, are examined in more detailed case studies. Third, the specifics of the cyberformance audience are explored and commonalities are identified between theatre and online culture. In conclusion, this thesis suggests that theatre and the internet have much to offer each other in this current global state of transition, and that cyberformance offers one means by which to facilitate the incorporation of new technologies into our lives.
Abstract:
Research has noted a ‘pronounced pattern of increase with increasing remoteness' in death rates from road crashes. However, crash characteristics by remoteness are not commonly or consistently reported, with definitions of rural and urban often relying on proxy measures such as the prevailing speed limit. The current paper seeks to evaluate the efficacy of the Accessibility/Remoteness Index of Australia (ARIA+) in identifying trends in road crashes. ARIA+ does not rely on road-specific measures; it uses distances to populated centres to attribute a score to an area, and these scores can in turn be grouped into five classifications of increasing remoteness. The current paper applies these classifications at the broad level of the Australian Bureau of Statistics' Statistical Local Areas, thus avoiding precise crash locating or dedicated mapping software. Analyses used Queensland road crash database details for all 31,346 crashes resulting in a fatality or hospitalisation between 1 July 2001 and 30 June 2006 inclusive. Results showed that this simplified application of ARIA+ aligned with previous definitions such as speed limit, while also providing further delineation. Differences in crash contributing factors were noted with increasing remoteness, such as a greater representation of alcohol and ‘excessive speed for the circumstances'. Other factors, such as the predominance of younger drivers in crashes, differed little by remoteness classification. The results are discussed in terms of the utility of remoteness as a graduated rather than binary (rural/urban) construct and the potential for combining ARIA crash data with census and hospital datasets.
Abstract:
‘The dancing doctorate is an interrogative endeavour which can but nurture the art form and forge a beneficial dynamism between those who seek and those who assess the emerging knowledges of dance' (Vincs, 2009). From 2006 to 2008, three dance academics from Perth, Brisbane and Melbourne undertook a research project entitled Dancing between Diversity and Consistency: Refining Assessment in Postgraduate Degrees in Dance, funded by the ALTC Priority Projects Program. Although assessment rather than supervision was the primary focus of this research, interviews with 40 examiner/supervisors, 7 research deans and 32 candidates across Australia and across the creative arts, primarily in dance, provide an insight into what might be considered best practice in preparing students for higher research degrees, and into the challenges that embodied and experiential knowledges present for supervision. The study also gained the industry perspectives of dance professionals in a series of national forums in 5 cities, based around the value of higher degrees in dance. The qualitative data gathered from these two primary sources were coded and analysed using the NVivo system. Further perspectives were drawn from international consultant and dance researcher Susan Melrose, as well as from recent publications in the field. Dance is a young addition to academia, and consequently there tends to be a close liaison between the academy and the industry, with a relational fluidity that is both beneficial and problematic. This partially explains why dance research higher degrees are predominantly practice-led (or multi-modal, referring to those theses where practice comprises the substantial examinable component). As a physical, embodied art form, dance engages with the contested territory of legitimising alternative forms of knowledge that do not sit comfortably with accepted norms of research.
In supporting research students engaged with dance practice, supervisors traverse the tricky terrain of balancing university academic requirements with studies that are emergent, not only in the practice and attendant theory but in their methodologies and open-ended outcomes; and in an art form in which originality and new knowledge also arise from collaborative creative processes. Formal supervisor accreditation through training is now mandatory in most Australian universities, but it tends to be generic and does not address the specificities of supervision in particular disciplines. This paper takes up the kind of alternative proposed by Edwards (2002): that efforts to improve postgraduate supervision will be effective if supervisors are empowered to generate their own standards and share best practice; in this case, in ways appropriate to the needs of their discipline and to alternative modes of thesis presentation. In order to frame the qualities and processes conducive to this goal, this paper will draw both on the experiences of interviewees and on the philosophical premises which underpin the research findings of our study. These include the ongoing challenge of dissolving the binary opposition of theory and practice, especially in creative arts practice, where theory resides in and emerges from the doing as much as in articulating reflection about the doing through what Melrose (2003) terms ‘mixed mode disciplinary practices'. In guiding practitioners through research higher degrees, how do supervisors deal with not only different forms of knowledge but indeed differing modes of knowledge? How can they navigate the tensions that occur between the ‘incompatible competencies' (Candlin, 2000) of the ‘spectating' academic experts, with their ‘irrepressible drive ... to inscribe, interpret, and hence to practise temporal closure', and practitioner experts who create emergent works of ‘residual unfinishedness' (Melrose, 2006) which are not only embodied but ephemeral, as in the case of live performance?
Abstract:
This paper will examine the literature on ‘anorexia nervosa’, and argue that it is underpinned by three fundamental assumptions. First, ‘anorexia nervosa’ is a reflection of the mismatch between true ‘inner self’ and the external ‘false self’, the latter self being the distorted product of a male dominated society. Second, the explanation for the severe fasting practices constitutive of ‘anorexia nervosa’ (a new social problem) is to be found within the binary opposition of resistance/conformity to contemporary cultural expectations. Finally, ‘anorexia nervosa’ is a problem which exists in nature (i.e., independently of analysis). It was eventually discovered, named and explained. This paper will problematise each of these assumptions in turn, and in doing so, it will propose an alternative way of understanding contemporary fasting practices.
Abstract:
The prevalence and concentrations of Campylobacter jejuni, Salmonella spp. and enterohaemorrhagic E. coli (EHEC) were investigated in surface waters in Brisbane, Australia using quantitative PCR (qPCR) based methodologies. Water samples were collected from the Brisbane City Botanic Gardens (CBG) Pond and two urban tidal creeks (Oxley Creek and Blunder Creek). Of the 32 water samples collected, 8 (25%), 1 (3%), 9 (28%), 14 (44%), and 15 (47%) were positive for the C. jejuni mapA, Salmonella invA, EHEC O157 LPS, EHEC VT1, and EHEC VT2 genes, respectively. The presence/absence of the potential pathogens did not correlate with either E. coli or enterococci concentrations, as determined by binary logistic regression. In conclusion, the high prevalence and concentrations of potential zoonotic pathogens, along with the concentrations of one or more fecal indicators, indicate poor microbial quality of the surface water and could represent a significant health risk to users. The results of the current study provide valuable information to water quality managers seeking to minimize the risk from pathogens in surface waters.
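The correlation test mentioned above regresses a binary outcome (pathogen detected or not) on a continuous indicator concentration. A minimal stdlib-only sketch of that fit is below; the data points are invented for illustration and do not come from the study.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-predictor binary logistic regression fitted by gradient ascent
    on the log-likelihood; returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)        # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical data: log10 E. coli concentration vs pathogen detected (1/0).
# A slope near zero would echo the study's finding of no correlation.
log_ec = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
detect = [0, 1, 0, 1, 0, 1, 0, 1]
b0, b1 = fit_logistic(log_ec, detect)
```

In practice one would use a statistics package and inspect the Wald test or likelihood-ratio p-value for the slope, rather than the raw coefficient.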
Abstract:
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is therefore very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis and human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because they are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO; Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which in turn benefits from the improved performance the particle filter provides in uncertain conditions arising from occlusion and noise. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results, demonstrating a significant improvement in performance when compared to a system using either modality individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improving security in areas under surveillance.
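The particle-filter cycle the abstract describes (predict with a motion model, weight each particle by an observation likelihood, resample) can be illustrated in one dimension. This is a generic bootstrap filter with assumed Gaussian motion and observation models, not the SCF itself.

```python
import math
import random

def particle_filter(observations, n_particles=500, motion_std=1.0,
                    obs_std=2.0, seed=0):
    """Bootstrap particle filter for 1-D position tracking.
    Returns the weighted-mean position estimate for each frame."""
    rng = random.Random(seed)
    particles = [rng.gauss(observations[0], obs_std) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: diffuse particles with a random-walk motion model
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # update: weight each particle by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample: draw a new particle set proportional to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Hypothetical track: object moving at constant velocity, noisy measurements.
rng = random.Random(42)
truth = [0.5 * t for t in range(30)]
obs = [x + rng.gauss(0.0, 2.0) for x in truth]
est = particle_filter(obs)
```

In a real tracker the scalar position would be replaced by a state vector (position, velocity, scale) and the Gaussian likelihood by a colour-histogram or edge-feature score per frame.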
Abstract:
Quantitative Microbial Risk Assessment (QMRA) analysis was used to quantify the risk of infection associated with exposure to pathogens from potable and non-potable uses of roof-harvested rainwater in South East Queensland (SEQ). A total of 84 rainwater samples were analysed for the presence of faecal indicators (using culture-based methods) and zoonotic bacterial and protozoan pathogens using binary and quantitative PCR (qPCR). The concentrations of the Salmonella invA and Giardia lamblia β-giardin genes ranged from 65-380 genomic units/1000 mL and 9-57 genomic units/1000 mL of water, respectively. After converting gene copies to cell/cyst numbers, the risk of infection from G. lamblia and Salmonella spp. associated with the use of rainwater for bi-weekly garden hosing was calculated to be below the threshold value of 1 extra infection per 10,000 persons per year. However, the estimated risk of infection from drinking the rainwater daily was 44-250 (for G. lamblia) and 85-520 (for Salmonella spp.) infections per 10,000 persons per year. Since this health risk seems higher than that expected from the reported incidence of gastroenteritis, the assumptions used to estimate these infection risks are critically discussed. Nevertheless, it would seem prudent to disinfect rainwater for potable use.
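The step from ingested dose to infection probability in a QMRA is typically a dose-response model. The sketch below uses the exponential model; the parameter r = 0.0199 is a commonly cited value for Giardia lamblia, and the exposure figures are invented, so this does not reproduce the paper's calculation.

```python
import math

def annual_infection_risk(dose_per_event, events_per_year, r=0.0199):
    """Exponential dose-response model: P_event = 1 - exp(-r * dose).
    The annual risk assumes independent exposure events.
    r = 0.0199 is a commonly cited Giardia lamblia value (assumption)."""
    p_event = 1.0 - math.exp(-r * dose_per_event)
    return 1.0 - (1.0 - p_event) ** events_per_year

# Hypothetical exposure: 1 L of untreated rainwater drunk daily, containing
# 0.02 cysts/L on average (invented figure, not the paper's data).
annual_risk = annual_infection_risk(dose_per_event=0.02, events_per_year=365)
```

Even at this low cyst concentration, daily consumption pushes the annual risk well above the 1-in-10,000 benchmark, which is the same qualitative pattern the abstract reports for potable use versus garden hosing.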
Abstract:
This paper presents a reliability-based reconfiguration methodology for power distribution systems. Probabilistic reliability models of the system components are considered, and a Monte Carlo method is used to evaluate the reliability of the distribution system. The reconfiguration is aimed at maximizing the reliability of the power supplied to the customers. A binary particle swarm optimization (BPSO) algorithm is used to determine the optimal configuration of the sectionalizing and tie switches in the system. The proposed methodology is applied to a modified IEEE 13-bus distribution system.
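The Monte Carlo reliability evaluation inside such a reconfiguration loop can be sketched as a sampling estimate of SAIDI (average interruption duration per customer served). The component failure rates, repair times and customer counts below are illustrative assumptions, not data from the paper.

```python
import random

def monte_carlo_saidi(components, customers_affected, n_trials=5000, seed=7):
    """Crude Monte Carlo estimate of SAIDI: each component has an annual
    failure rate and a repair time (hours); each trial simulates one year.
    For small annual rates a single Bernoulli draw approximates Poisson."""
    rng = random.Random(seed)
    total_customers = sum(customers_affected)
    saidi_sum = 0.0
    for _ in range(n_trials):
        interrupted_hours = 0.0
        for (rate, repair_h), n_cust in zip(components, customers_affected):
            if rng.random() < rate:  # did this component fail this year?
                interrupted_hours += repair_h * n_cust
        saidi_sum += interrupted_hours / total_customers
    return saidi_sum / n_trials

# Three hypothetical feeder sections: (failures/yr, repair hours), customers.
comps = [(0.1, 4.0), (0.2, 2.0), (0.05, 8.0)]
custs = [100, 150, 50]
saidi = monte_carlo_saidi(comps, custs)
```

A BPSO outer loop would encode each candidate switch configuration as a bit string, re-derive the affected-customer sets for that topology, and use an estimate like this as the fitness to be minimized.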
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were developed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students' engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students' engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students' accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obstructed the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool', (ii) ‘dominant staff/compliant student', and (iii) ‘digital learning/academic performance'. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students' perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool' (or at least ‘not uncool'), (ii) being sufficiently ‘compliant', and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt to and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic, then, is ‘both-and' rather than ‘either-or' for these individuals, who have a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or of successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way' of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students' lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
Abstract:
The highly variable flagellin-encoding flaA gene has long been used for genotyping Campylobacter jejuni and Campylobacter coli. High-resolution melting (HRM) analysis is emerging as an efficient and robust method for discriminating DNA sequence variants. The objective of this study was to apply HRM analysis to flaA-based genotyping. The initial aim was to identify a suitable flaA fragment. It was found that the PCR primers commonly used to amplify the flaA short variable repeat (SVR) yielded a mixed PCR product unsuitable for HRM analysis. However, a primer set composed of the upstream primer used to amplify the fragment for flaA restriction fragment length polymorphism (RFLP) analysis and the downstream primer used for flaA SVR amplification generated a very pure PCR product, and this primer set was used for the remainder of the study. Eighty-seven C. jejuni and 15 C. coli isolates were analyzed by flaA HRM and also by partial flaA sequencing. There were 47 flaA sequence variants, and all were resolved by HRM analysis. The isolates used had previously been genotyped using single-nucleotide polymorphisms (SNPs), binary markers, CRISPR HRM, and flaA RFLP. flaA HRM analysis provided resolving power multiplicative to that of the SNPs, binary markers, and CRISPR HRM, and was largely concordant with flaA RFLP. It was concluded that HRM analysis is a promising approach to genotyping based on highly variable genes.
Abstract:
The purpose of this paper is to determine the prevalence of the toxic shock toxin gene (tst) and to enumerate the circulating strains of methicillin-sensitive Staphylococcus aureus (MSSA) and methicillin-resistant S. aureus (MRSA) in Australian isolates collected over two decades. The aim was to subtype these strains using the binary genes pvl, cna, sdrE, pUB110 and pT181. Isolates were assayed using real-time polymerase chain reaction (PCR) for mecA, nuc, 16S rRNA, eight single-nucleotide polymorphisms (SNPs) and the five binary genes. Two real-time PCR assays were developed for tst. The 90 MRSA isolates belonged to CC239 (39 in 1989, 38 in 1996 and ten in 2003), CC1 (two in 2003) and CC22 (one in 2003). The majority of the 210 MSSA isolates belonged to CC1 (26), CC5 (24) and CC78 (23). Only 18 isolates were tst-positive and only 15 were pvl-positive. Nine MSSA isolates belonged to five binary types of ST93, including two pvl-positive types. The proportion of tst-positive and pvl-positive isolates was low, and no significant increase was demonstrated. Dominant MSSA clonal complexes were similar to those seen elsewhere, with the exception of CC78. CC239 MRSA (AUS-2/3) was the predominant MRSA but decreased significantly in prevalence, while CC22 (EMRSA-15) and CC1 (WA-1) emerged. Genetically diverse ST93 MSSA predated the emergence of ST93-MRSA (the Queensland clone).
Abstract:
The word “queer” is a slippery one; its etymology is uncertain, and academic and popular usage attribute conflicting meanings to the word. By the mid-nineteenth century, “queer” was used as a pejorative term for a (male) homosexual. This negative connotation continues in its use as a term of homophobic abuse. In recent years, “queer” has taken on additional uses: as an all-encompassing term for culturally marginalised sexualities – gay, lesbian, trans, bi, and intersex (“GLBTI”) – and as a theoretical strategy which deconstructs binary oppositions that govern identity formation. Tracing its history, the Oxford English Dictionary notes that the earliest references to “queer” may have appeared in the sixteenth century. These early examples of queer carried negative connotations such as “vulgar,” “bad,” “worthless,” “strange,” or “odd,” and such associations continued until the mid-twentieth century. The early nineteenth century, and perhaps earlier, employed “queer” as a verb, meaning “to put out of order,” “to spoil,” or “to interfere with.” The adjectival form also began to emerge during this time, referring to a person's condition as being “not normal” or “out of sorts,” or to causing a person “to feel queer,” meaning “to disconcert, perturb, unsettle.” According to Eve Sedgwick (1993), “the word ‘queer' itself means across – it comes from the Indo-European root – twerkw, which also yields the German quer (traverse), Latin torquere (to twist), English athwart . . . it is relational and strange.” Despite the gaps in its lineage and changes in usage, meaning and grammatical form, “queer” as a political and theoretical strategy has benefited from its diverse origins. It refuses to settle comfortably into a single classification, preferring instead to traverse several categories that would otherwise attempt to stabilise notions of chromosomal sex, gender and sexuality.
Abstract:
Community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) infections are emerging in southeast Queensland, Australia, but the incidence of carriage of CA-MRSA strains is unknown. The aim of this study was to assess the nasal carriage rate of S. aureus, including CA-MRSA strains, in the general adult population of southeast Queensland. A total of 396 patients presenting to general practices in two Brisbane suburbs and 303 volunteers randomly selected from the electoral rolls in the same suburbs completed a medical questionnaire and had nasal swabs performed for S. aureus. All isolates of S. aureus underwent antibiotic susceptibility testing and single-nucleotide polymorphism (SNP) and binary typing, including determination of Panton–Valentine leukocidin (PVL). The nasal carriage rate of methicillin-susceptible S. aureus (MSSA) was 202/699 (28%), a rate similar to that found in other community-based nasal carriage studies. According to multivariate analysis, nasal carriage of S. aureus was associated with male sex, young adult age group and Caucasian ethnicity. Only two study isolates (one MSSA and one CA-MRSA) carried PVL. The nasal carriage rate of MRSA was low, at 5/699 (0.7%), and only two study participants (0.3%) had CA-MRSA strains. CA-MRSA is an emerging cause of infection in southeast Queensland, but as yet the incidence of carriage of CA-MRSA in the general community is low.