919 results for "recent 300 years"


Relevance:

20.00%

Publisher:

Abstract:

Contemporary mainstream theatre audiences observe etiquette strictures that regulate behaviour. As Baz Kershaw argues, “the idea of the passive audience for performance has been associated usually with mainstream theatre.” This paper explores a mainstream event where the extant contract of audience silence was replaced with a raw, emotional audience response that continued into the post-performance discussion. William Gibson’s The Miracle Worker was performed by Crossbow Productions at the Brisbane Powerhouse to an audience made up of mainstream theatre patrons and people living with hearing and visual impairment. Various elements such as shadow signing and tactile tours worked metatheatrically and self-referentially to heighten audience awareness. During the performances the verbal and non-verbal responses of the audience were so pervasive that the audience became not only co-creators of the performance text but performers of a rich audience text that had a dramatic impact on the theatrical experience for audience and actors alike. During the post-performance discussion the audience performers spilled onto the stage interacting with the actors, extending the pleasure of the experience. This paper discusses how in privileging the audience as co-creators and performers, the chasm between stage and audience was bridged. The audiences’ performance changed, enriched and created new meanings for each performance.


The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
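The mean deviation metric referred to above compares the driven lateral path against a normative lane-change path. The following is a minimal illustrative sketch, not the official LCT analysis software; the paths and sampling are hypothetical.

```python
import numpy as np

def mean_deviation(driven_y, normative_y):
    """Mean absolute lateral deviation between the driven path and the
    normative lane-change path, sampled at equal longitudinal intervals."""
    driven_y = np.asarray(driven_y, dtype=float)
    normative_y = np.asarray(normative_y, dtype=float)
    return float(np.mean(np.abs(driven_y - normative_y)))

# Hypothetical example: a driver who initiates the lane change early.
normative = np.array([0.0, 0.0, 1.75, 3.5, 3.5, 3.5])  # ideal lateral position (m)
driven    = np.array([0.0, 0.5, 2.00, 3.5, 3.6, 3.5])  # recorded lateral position (m)
print(mean_deviation(driven, normative))  # larger values = more degraded control
```

In the study's terms, a secondary task that degrades lateral control inflates this statistic relative to baseline driving.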


Camera calibration information is required in order for multiple-camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time-consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second area involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over the existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
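To illustrate the general idea behind Hessian-based shape estimation (this is a hypothetical finite-difference sketch, not the thesis implementation), the eigendecomposition of the intensity Hessian at a blob centre recovers an elliptical shape estimate: the eigenvectors give the ellipse axes and the relative eigenvalue magnitudes its anisotropy.

```python
import numpy as np

def hessian_shape(patch, y, x):
    """Estimate local elliptical shape at pixel (y, x) from the intensity
    Hessian, computed with central finite differences. For a bright blob
    both eigenvalues are negative; their ratio reflects the anisotropy."""
    Ixx = patch[y, x + 1] - 2 * patch[y, x] + patch[y, x - 1]
    Iyy = patch[y + 1, x] - 2 * patch[y, x] + patch[y - 1, x]
    Ixy = (patch[y + 1, x + 1] - patch[y + 1, x - 1]
           - patch[y - 1, x + 1] + patch[y - 1, x - 1]) / 4.0
    H = np.array([[Ixx, Ixy], [Ixy, Iyy]])
    evals, evecs = np.linalg.eigh(H)
    return H, evals, evecs

# Hypothetical anisotropic Gaussian blob, wider in x than in y.
yy, xx = np.mgrid[-10:11, -10:11]
blob = np.exp(-(xx**2 / 50.0 + yy**2 / 8.0))
H, evals, evecs = hessian_shape(blob, 10, 10)
# Both eigenvalues are negative at the bright blob's centre, and the
# y-direction curvature dominates because the blob is narrower in y.
```

The computational saving mentioned in the abstract comes from needing only second derivatives at one scale, rather than the smoothed products of first derivatives that the second moment matrix requires.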


In recent years the concepts of social inclusion and exclusion have become part of the repertoire of third-way policy discourses that seek to respond to complex socioeconomic problems through processes of 'joined-up' and 'integrated' governance. As part of this approach, we are witnessing an increased focus on the role of the third sector in facilitating social inclusion. While the push towards governing through networks has gained moral legitimacy in some areas of social policy, the practical legitimacy - that is, whether these new approaches actually produce demonstrably better outcomes than more traditional policy approaches - remains largely unsubstantiated. This article contributes to the evidence base by examining the social-inclusion impacts of eleven community enterprises operating in Victoria, and relates these findings to the wider available evidence on the social, economic and civic effects of social enterprise.


Interactional competence has emerged as a focal point for language testing researchers in recent years. In spoken communication involving two or more interlocutors, the co-construction of discourse is central to successful interaction. The acknowledgement of co-construction has led to concern over the impact of the interlocutor and the separability of performances in speaking tests involving interaction. The purpose of this article is to review recent studies of direct relevance to the construct of interactional competence and its operationalisation by raters in the context of second language speaking tests. The review begins by tracing the emergence of interaction as a criterion in speaking tests from a theoretical perspective, and then focuses on research salient to interactional effectiveness that has been carried out in the context of language testing interviews and group and paired speaking tests.


A recent decision by the Australian High Court means that, unless faculty are bound by an assignment or intellectual property (IP) policy, they may own inventions resulting from their research. Thirty years after its introduction, the US Bayh-Dole Act, which vests ownership of employee inventions in the employer university or research organization, has become a model for commercialization around the world. In Australia, despite recommendations that a Bayh-Dole-style regime be adopted, the recent decision in University of Western Australia (UWA) v Gray has moved the default legal position in a diametrically opposite direction. A key focus of the debate was whether faculty’s duty to carry out research also encompasses a duty to invent. Late last year, the Full Federal Court confirmed a lower court ruling that it does not, and this year the High Court refused leave to appeal (denied certiorari). Thus, Gray stands as Australia’s most faculty-friendly authority to date.


The topics of corruption and tax evasion have attracted significant attention in the literature in recent years. We build on that literature by investigating empirically: (1) whether attitudes toward corruption and tax evasion vary systematically with gender and (2) whether gender differences decline as men and women face similar opportunities for illicit behavior. We use data on eight Western European countries from the World Values Survey and the European Values Survey. The results reveal significantly greater aversion to corruption and tax evasion among women. This holds across countries and time, and across numerous empirical specifications. (JEL H260, D730, J160, Z130)
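The kind of specification reported can be sketched in miniature. The data below are simulated, not the World Values Survey or European Values Survey samples the paper uses, and the coefficients are invented; the sketch only shows the shape of the test, regressing a justifiability score on a gender indicator plus a control.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical survey: score 1-10, higher = illicit behaviour more justifiable.
female = rng.integers(0, 2, n)             # 1 if respondent is a woman
age = rng.integers(18, 80, n)
# Simulate greater aversion among women: a negative coefficient on `female`.
score = 3.0 - 0.6 * female + 0.01 * (age - 45) + rng.normal(0, 1.5, n)

# OLS with an intercept: score ~ female + age
X = np.column_stack([np.ones(n), female, age])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
# beta[1] estimates the gender gap; with this simulated data it recovers
# a value close to the -0.6 built into the data-generating process.
```

In the paper's specifications the analogous gender coefficient is negative and significant across countries, survey waves and model variants.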


There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
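The simulation argument can be reproduced in miniature. Treating each vehicle passage as an independent Bernoulli trial with a small, site-specific crash probability (Poisson trials) and observing sites over a short period, low exposure alone yields a large share of zero-crash sites even though every site has positive risk. The numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites = 2_000                                # e.g. intersections
exposure = rng.integers(50, 500, n_sites)      # vehicle passages per period (low)
p = rng.uniform(1e-4, 5e-4, n_sites)           # unequal per-passage crash risk

# Poisson trials: each passage is an independent Bernoulli trial,
# so crashes at a site follow a binomial count over its exposure.
crashes = np.array([rng.binomial(e, pi) for e, pi in zip(exposure, p)])

zero_share = float(np.mean(crashes == 0))
# Most sites record zero crashes despite every site being "unsafe"
# (p > 0); no dual-state data-generating process is required.
print(f"share of zero-crash sites: {zero_share:.2f}")
```

Lengthening the observation period (raising exposure) shrinks the share of zeros toward what a plain Poisson approximation predicts, which is the abstract's point about choosing time/space scales.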


In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
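The effect can be illustrated with SPFs of the common form E[N] = exp(b0) * AADT_maj^b1 * AADT_min^b2 fitted separately by crash type: whenever the coefficients differ across types, the predicted proportions necessarily vary with AADT. The coefficients below are invented for illustration, not taken from the paper's fitted models.

```python
import numpy as np

def spf(aadt_maj, aadt_min, b0, b1, b2):
    """Safety performance function: expected annual crash frequency."""
    return np.exp(b0) * aadt_maj**b1 * aadt_min**b2

def proportions(aadt_maj, aadt_min=400):
    """Predicted crash-type proportions from two hypothetical type-specific SPFs."""
    angle = spf(aadt_maj, aadt_min, b0=-9.0, b1=0.55, b2=0.45)   # intersecting-direction
    rear = spf(aadt_maj, aadt_min, b0=-13.0, b1=1.05, b2=0.35)   # same-direction
    total = angle + rear
    return angle / total, rear / total

for maj in (2_000, 20_000, 82_000):
    p_angle, p_rear = proportions(maj)
    print(f"major AADT {maj:>6}: intersecting {p_angle:.0%}, same-direction {p_rear:.0%}")
```

Because the exponent on major-road AADT differs between the two hypothetical types, the intersecting-direction share falls as major-road AADT grows, mirroring the pattern the paper reports and showing why fixed proportions cannot hold.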


Following the judgement of the High Court in Tabet v Gett [2010] HCA 12, handed down on 21 April 2010, it appears that in Australia there is now very limited scope for recovery in negligence for the loss of a chance of a better medical outcome.


Bioethics committees are the focus of international scrutiny, particularly in relation to their application of the principle of beneficence, ensuring that risks incurred in research are outweighed by benefits to those involved directly and to the broader society. Beneficence, in turn, has become an international focus in research with young children, who hitherto had rarely been seen or heard in their own right in research. Twenty years ago, the United Nations Convention on the Rights of the Child 1989 raised global awareness of children’s human rights to both participation and protection, and articulation of children’s rights came to inform understandings of young children’s rights in research. In the intervening period, countries such as Australia came to favour child protection and risk minimisation in research over the notion of children’s bona fide participation in research. A key element of the protection regime was the theoretical understanding of young children as developmentally unable and, therefore, unfit to understand, consent to and fully participate as research participants. This understanding has been challenged in recent decades by new theoretical understandings of children’s competence, where children can be seen to demonstrate competence, even at an early age, in consenting to, participating in and withdrawing from research. The paper draws on these understandings to provide insights for human research gatekeepers, such as bioethics committees, to deal with the challenges of research with young children and to realise the benefits that may accrue to children in research.