983 results for Andrew


Relevance: 10.00%

Publisher:

Abstract:

Purpose: In 1970, Enright observed a distortion of perceived driving speed induced by monocular application of a neutral density (ND) filter. If a driver looks out of the right side of a vehicle with a filter over the right eye, the driver perceives a reduction of the vehicle's apparent velocity, while applying an ND filter over the left eye increases the vehicle's apparent velocity. The purpose of the current study was to provide the first empirical measurements of the Enright phenomenon. Methods: Ten experienced drivers drove an automatic sedan on a closed road circuit. Filters (0.9 ND) were placed over the left, right or both eyes during a driving run, in addition to a control condition with no filters in place. Subjects were asked to look out of the right side of the car and adjust their driving speed to either 40 km/h or 60 km/h. Results: Without a filter, or with both eyes filtered, subjects showed good estimation of speed when asked to travel at 60 km/h but travelled a mean of 12 to 14 km/h faster than the requested 40 km/h. Subjects travelled faster than these baselines by a mean of 7 to 9 km/h (p < 0.001) with the filter over their right eye, and 3 to 5 km/h slower with the filter over their left eye (p < 0.05). Conclusions: The Enright phenomenon causes significant and measurable distortions of perceived driving speed under real-world driving conditions.

This paper discusses the use of models in automatic computer forensic analysis, and proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgements as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
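The computer profiling object model is described only abstractly above. As a rough illustration of the idea, the sketch below models a computer as objects with attributes and inter-relationships and runs a toy inference over them; all class names, fields and the inference rule are invented for illustration and are not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    """One artefact found on the computer: an application, document, account, etc."""
    name: str
    kind: str                                      # e.g. "application", "document"
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)    # links to other ProfileObjects

    def relate(self, other):
        self.related.append(other)

def probable_usage(objects):
    """Toy reasoning step: tally application categories to suggest probable usage."""
    tally = {}
    for obj in objects:
        if obj.kind == "application":
            category = obj.attributes.get("category", "unknown")
            tally[category] = tally.get(category, 0) + 1
    return max(tally, key=tally.get) if tally else "unknown"

browser = ProfileObject("firefox", "application", {"category": "web"})
editor = ProfileObject("photoshop", "application", {"category": "graphics"})
painter = ProfileObject("gimp", "application", {"category": "graphics"})
draft = ProfileObject("draft.psd", "document", {"modified": "2009-03-01"})
editor.relate(draft)                               # document produced by the editor

print(probable_usage([browser, editor, painter, draft]))  # graphics
```

An automated reasoning engine would apply many such rules over the object graph; a human investigator could use the same structure to decide whether manual analysis is warranted.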

Purpose: Small red lights (one minute of arc or less) change colour appearance with positive defocus. We investigated the influence of longitudinal chromatic aberration and monochromatic aberrations on the colour appearance of small narrow-band lights. Methods: Seven cyclopleged, trichromatic observers viewed a small light (one minute of arc; λmax = 510, 532, 550, 589, 620 or 628 nm; approximately 19 per cent Weber contrast) centred within a black annulus (4.5 minutes of arc) and surrounded by a uniform white field (2,170 cd/m²). Pupil size was four millimetres. An optical trombone varied focus. Longitudinal chromatic aberration was manipulated with three lenses: a two-component Powell achromatising lens that neutralises the eye's chromatic aberration, a doublet that doubles it, and a triplet that reverses it. Astigmatism and higher-order monochromatic aberrations were corrected using adaptive optics. Results: Observers reported a change in appearance of the small red light (628 nm) without the Powell lens at +0.49 ± 0.21 D defocus and with the doublet at +0.62 ± 0.16 D. Appearance did not alter with the Powell lens, and five of seven observers reported the phenomenon with the triplet for negative defocus (-0.80 ± 0.47 D). Correction of monochromatic aberrations did not significantly affect the magnitude of defocus at which the appearance of the red light changed (+0.44 ± 0.18 D without correction; +0.46 ± 0.16 D with correction). The change in colour appearance with defocus extended to other wavelengths (λmax = 510 to 620 nm), with the direction of the effect reversed for short wavelengths relative to long wavelengths. Conclusions: Longitudinal chromatic aberration, but not monochromatic aberrations, is involved in changing the appearance of small lights with defocus.

Purpose: To investigate speed regulation during overground running on undulating terrain. Methods: Following an initial laboratory session to calculate physiological thresholds, eight experienced runners completed a spontaneously paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Results: Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections remained significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. A modified gradient factor predicted 89% of group-level speed. Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Conclusions: Speed was best predicted using a weighted factor that accounts for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain.
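The abstract does not specify the form of its weighted gradient factor, so the sketch below shows one plausible shape: blend the current gradient with the gradient just completed, then scale speed by the blend. The 0.3 weighting, the 0.5 km/h-per-percent sensitivity and the speeds are invented for illustration, not the study's fitted values.

```python
def effective_gradient(current, prior, weight_prior=0.3):
    """Blend the current gradient with the preceding one (both as rise/run
    fractions). The 0.3 weighting on the prior gradient is an assumption."""
    return (1 - weight_prior) * current + weight_prior * prior

def predicted_speed(level_speed, current, prior, sensitivity=0.5):
    """Speed drops on uphills and rises on downhills in proportion to the
    blended gradient; `sensitivity` (km/h per percent grade) is assumed."""
    return level_speed - sensitivity * effective_gradient(current, prior) * 100

# A runner holding 12 km/h on the level, now on a 5% downhill after an 8% climb:
print(round(predicted_speed(12.0, current=-0.05, prior=0.08), 2))  # 12.55
```

Because the prior gradient enters the blend, the model reproduces the paper's observation that level-section speed stays depressed for a while after a climb and elevated after a descent.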

Network Jamming systems provide real-time collaborative media performance experiences for novice or inexperienced users. In this paper we will outline the theoretical and developmental drivers for our Network Jamming software, called jam2jam. jam2jam employs generative algorithmic techniques with particular implications for accessibility and learning. We will describe how theories of engagement have directed the design and development of jam2jam and show how iterative testing cycles at numerous international sites have informed the evolution of the system and its educational potential. Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences for users with little formal musical or artistic expertise. This presentation examines the educational affordances of these systems, evidenced by field data drawn from the Network Jamming Project. These generative performance systems enable access to a unique kind of music/media ensemble performance with very little musical/media knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and creative knowledge through collaborative performance. Through the process of observing, documenting and analysing young people interacting with the generative media software jam2jam, a theory of meaningful engagement has emerged from the need to describe and codify how users experience creative engagement with music/media performance and the locations of meaning.
In this research we observed that the musical metaphors and practices of 'ensemble' or collaborative performance and improvisation as a creative process for experienced musicians can be made available to novice users. The relational meanings of these musical practices afford access to high-level personal, social and cultural experiences. Within the creative process of collaborative improvisation lies a series of modes of creative engagement that move from appreciation through exploration, selection and direction toward embodiment. The expressive sounds and visions made in real time by collaborating improvisers are immediate and compelling. Generative media systems let novices access these experiences through simple interfaces that allow them to make highly professional and expressive sonic and visual content simply by using gestures and being attentive and perceptive to their collaborators. These kinds of experiences present the potential for highly complex expressive interactions with sound and media as a performance. Evidence that has emerged from this research suggests that collaborative performance with generative media is transformative and meaningful. In this presentation we draw out these ideas around an emerging theory of meaningful engagement that has evolved from the development of network jamming software. Primarily we focus on demonstrating how these experiences might lead to understandings of educational and social benefit.

With the emergence of multi-cores into the mainstream, there is a growing need for systems that allow programmers and automated systems to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system, and use them as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelization in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
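The paper's static effect system is not reproduced here, but the sufficient condition such a system checks can be illustrated dynamically: two tasks may run in parallel when neither writes state the other reads or writes. The effect summaries below are hand-written stand-ins for what a static analysis would infer.

```python
from concurrent.futures import ThreadPoolExecutor

def may_parallelise(a, b):
    """Sufficient condition: neither task writes anything the other reads or writes."""
    return not (a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & a["reads"])

# Hand-written effect summaries, standing in for statically inferred ones:
task_a = {"reads": {"orders"}, "writes": {"invoices"}}
task_b = {"reads": {"orders"}, "writes": {"shipments"}}

if may_parallelise(task_a, task_b):
    # Safe to run the two tasks concurrently (here, two trivial workloads):
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(sum, [[1, 2, 3], [4, 5, 6]]))
    print(results)  # [6, 15]
```

A static system proves this condition at compile time, avoiding the runtime check; the lightweight runtime described in the paper handles the data flows the type system cannot resolve statically.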

The ways in which the "traditional" tension between words and artwork can be perceived has huge implications for understanding the relationship between critical or theoretical interpretation, art and practice, and research. Within the practice-led PhD this can generate a strange sense of disjuncture for the artist-researcher, particularly when engaged in writing the exegesis. This paper aims to explore this tension through an introductory investigation of the work of the philosopher Andrew Benjamin. For Benjamin, criticism completes the work of art. Criticism is, with the artwork, at the centre of our experience and theoretical understanding of art; in this way the work of art and criticism are co-productive. The reality of this co-productivity can be seen in three related articles on the work of the American painter Marcia Hafif. In each of these articles there are critical negotiations of just how the work of art operates as art and, theoretically, within the field of art. This focus has important ramifications for the writing and reading of the exegesis within the practice-led research higher degree. By including art as a significant part of the research reporting process, the artist-researcher is also staking a claim as to the critical value of their work. Rather than resisting the tension between word and artwork, the practice-led artist-researcher needs to embrace the co-productive nature of critical word and creative work to more completely articulate their practice and its significance as research. The ideal venue and opportunity for this is the exegesis.

Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have tri-faceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module. Methods: That this simple collimator model can produce spatially and dosimetrically accurate micro-collimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms. Results: Monte Carlo dose calculations for on- and off-axis fields are shown to produce good agreement with experimental values, even upon close examination of the penumbrae. Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.

The current epidemic of paediatric obesity is accompanied by a myriad of health-related comorbid conditions. Despite the higher prevalence of orthopaedic conditions in overweight children, little published research has considered the influence of these conditions on the ability to undertake physical activity. As physical activity participation is directly related to improvements in physical fitness, skeletal health and metabolic conditions, higher levels of physical activity are encouraged, and exercise is commonly prescribed in the treatment and management of childhood obesity. However, research has not correlated orthopaedic conditions, including the increased joint pain and discomfort commonly reported by overweight children, with decreases in physical activity. Research has confirmed that overweight children typically display a slower, more tentative walking pattern, with increased forces at the hip, knee and ankle during 'normal' gait. This research, combined with anthropometric data indicating a higher prevalence of musculoskeletal malalignment in overweight children, suggests that such individuals are poorly equipped to undertake certain forms of physical activity. Concomitant increases in obesity and decreases in physical activity level strongly support the need to better understand the musculoskeletal factors associated with the performance of motor tasks by overweight and obese children.

While my PhD is practice-led research, it is my contention that such an inquiry cannot develop as long as it tries to emulate other models of research. I assert that practice-led research needs to account for an epistemological unknown or uncertainty central to the practice of art. By focusing on what I call the artist's 'voice,' I will show how this 'voice' comprises a dual motivation, 'articulate' representation and 'inarticulate' affect, which do not necessarily derive from the artist. Through an analysis of art-historical precedents and critical literature (the work of Jean-François Lyotard and Andrew Benjamin, and the critical methods of philosophy, phenomenology and psychoanalysis), as well as of my own painting and digital arts practice, I aim to demonstrate how this unknown or uncertain aspect of artistic inquiry can be mapped. It is my contention that practice-led research needs to address and account for this dualistic 'voice' in order to more comprehensively articulate its unique contribution to research culture.

Climate change is an urgent global public health issue with substantial predicted impacts in the coming decades. Concurrently, global burden of disease studies highlight problems such as obesity, mental health problems and a range of other chronic diseases, many of which have origins in childhood. There is a unique opportunity to engage children in both health promotion and education for sustainability during their school years to help ameliorate both environmental and health issues. Evidence exists for the most effective ways to do this: through education that is empowering, action-oriented and relevant to children's day-to-day interests and concerns, and by tailoring such education to different educational sectors. The aim of this discussion paper is to argue the case for sustainability education in schools that links with health promotion and adopts a practical approach to engaging children in these important public health and environmental issues. We describe two internationally implemented whole-school reform movements, Health Promoting Schools (HPS) and Sustainable Schools (SS), which seek to operationalise transformative educational processes. Drawing on international evidence and Australian case examples, we contend that children's active involvement in such processes is not only educationally engaging and rewarding, it also contributes to human and environmental resilience and health. Further, school settings can play an important ecological public health role, incubating and amplifying the socially transformative changes urgently required to create pathways to healthy, just and sustainable human futures on a viable planet.

Much research has investigated the differences between option-implied volatilities and econometric model-based forecasts. Implied volatility is a market-determined forecast, in contrast to model-based forecasts that employ some degree of smoothing of past volatility. Implied volatility therefore has the potential to reflect information that a model-based forecast could not. This paper considers two issues relating to the informational content of the S&P 500 VIX implied volatility index: first, whether it subsumes information on how historical jump activity contributed to price volatility, and second, whether the VIX reflects any incremental information pertaining to future jump activity relative to model-based forecasts. It is found that the VIX index both subsumes information relating to past jump contributions to total volatility and reflects incremental information pertaining to future jump activity. This issue has not been examined previously and expands our understanding of how option markets form their volatility forecasts.
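The paper's tests are not reproduced here, but the kind of encompassing regression involved can be sketched on synthetic data: regress a jump-activity proxy on both a model-based forecast and a "VIX-like" series, and read incremental information off the VIX loading. All series and coefficients below are simulated, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

model_forecast = 0.15 + 0.02 * rng.standard_normal(n)  # smoothed model-based forecast
jump_activity = rng.standard_normal(n)                 # proxy for future jump activity
# A synthetic "VIX" embedding both the model forecast and some jump information:
vix = 0.9 * model_forecast + 0.01 * jump_activity + 0.005 * rng.standard_normal(n)

# Encompassing regression: does the VIX help explain jump activity once the
# model-based forecast is controlled for? A positive VIX loading suggests
# the VIX carries incremental jump information.
X = np.column_stack([np.ones(n), model_forecast, vix])
beta, *_ = np.linalg.lstsq(X, jump_activity, rcond=None)
print(beta[2] > 0)
```

On this simulated data the VIX loading is positive by construction; the paper's contribution is establishing the analogous result on actual S&P 500 VIX and volatility-model data.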

Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection. In this paper we present a role-mining-inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph representing relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both from SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
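As a minimal sketch of the set-theoretic idea, assuming a profile is simply the set of transaction codes a user has executed: strict-subset relations between profiles give the edges of a directed acyclic graph, and a profile covering a conflicting set of duties flags a segregation-of-duties problem. The transaction codes, users and conflict rule below are invented, not drawn from the paper's SAP datasets.

```python
# Transaction profiles: each user's set of executed transaction codes
# (codes are invented stand-ins for SAP R/3 transactions).
profiles = {
    "alice": {"create_vendor", "post_invoice"},
    "bob":   {"post_invoice"},
    "carol": {"create_vendor", "post_invoice", "issue_payment"},
}

# Segregation-of-duties rule: no single user should hold all of these together.
conflicting = {"create_vendor", "post_invoice", "issue_payment"}

def sod_violations(profiles, conflicting):
    """Users whose profile covers the whole conflicting duty set."""
    return sorted(u for u, p in profiles.items() if conflicting <= p)

def profile_dag(profiles):
    """Edges point from a strictly smaller profile to each profile containing it;
    strict containment guarantees the graph is acyclic."""
    return sorted((a, b) for a in profiles for b in profiles
                  if a != b and profiles[a] < profiles[b])

print(sod_violations(profiles, conflicting))  # ['carol']
print(profile_dag(profiles))
```

The paper's two DAG-construction algorithms are more involved than this brute-force pairwise check, but they target the same containment structure.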

ERP systems generally implement controls to prevent certain common kinds of fraud. As evidenced by the legal requirement for company audits and the common incidence of fraud, however, there is an imperative need to detect more sophisticated patterns of fraudulent activity. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include descriptions of six fraud scenarios and the process of specifying and detecting the occurrence of those scenarios in ERP user log data using the prototype software we have developed. The test results for detecting these scenarios in log data have been verified and confirm the success of our approach, which generalizes to other ERP systems.
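The six scenarios themselves are specific to the paper, but the detection style can be sketched: scan an ordered user log for a sequence of actions matching a scenario definition. The log fields and the "self-payment" scenario below are invented for illustration and are not one of the paper's six scenarios.

```python
# Toy log entries: (timestamp, user, action, object) tuples standing in for
# ERP user log records.
log = [
    (1, "dave", "create_vendor", "V42"),
    (2, "erin", "approve_vendor", "V42"),
    (3, "dave", "issue_payment", "V42"),
    (9, "erin", "issue_payment", "V7"),
]

def detect_self_payment(log):
    """Scenario: the same user both creates a vendor and later pays it."""
    created = {}   # vendor id -> user who created it
    hits = []
    for t, user, action, obj in log:
        if action == "create_vendor":
            created[obj] = user
        elif action == "issue_payment" and created.get(obj) == user:
            hits.append((user, obj, t))
    return hits

print(detect_self_payment(log))  # [('dave', 'V42', 3)]
```

A framework like the paper's would let investigators declare many such scenario specifications and run them over exported ERP log data rather than hand-coding each one.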

Virtual 3D models of long bones are increasingly being used for implant design and research applications. The current gold standard for the acquisition of such data is Computed Tomography (CT) scanning. Due to radiation exposure, CT is generally limited to the imaging of clinical cases and cadaver specimens. Magnetic Resonance Imaging (MRI) does not involve ionising radiation and can therefore be used to image selected healthy human volunteers for research purposes. The feasibility of MRI as an alternative to CT for the acquisition of morphological bone data of the lower extremity has been demonstrated in recent studies [1, 2]. Some of the current limitations of MRI are long scanning times and difficulties with image segmentation in certain anatomical regions due to poor contrast between bone and surrounding muscle tissues. Higher field strength scanners promise faster imaging times or better image quality. In this study, image quality at 1.5T is quantitatively compared with images acquired at 3T.

The femora of five human volunteers were scanned using 1.5T and 3T MRI scanners from the same manufacturer (Siemens) with similar imaging protocols. A 3D FLASH sequence was used with TE = 4.66 ms, flip angle = 15° and voxel size = 0.5 × 0.5 × 1 mm. PA-matrix and body-matrix coils were used to cover the lower limb and pelvis, respectively. The signal-to-noise ratio (SNR) [3] and contrast-to-noise ratio (CNR) [3] of axial images from the proximal, shaft and distal regions were used to assess the quality of images from the 1.5T and 3T scanners. The SNR was calculated for the muscle and bone marrow in the axial images. The CNR was calculated for the muscle-to-cortex and cortex-to-bone-marrow interfaces, respectively.

Preliminary results (one volunteer) show that the SNR of muscle for the shaft and distal regions was higher in 3T images (11.65 and 17.60) than in 1.5T images (8.12 and 8.11). For the proximal region the SNR of muscle was higher in 1.5T images (7.52) than in 3T images (6.78). The SNR of bone marrow was slightly higher in 1.5T images for both the proximal and shaft regions, while it was lower in the distal region compared with 3T images. The CNR between muscle and bone for all three regions was higher in 3T images (4.14, 6.55 and 12.99) than in 1.5T images (2.49, 3.25 and 9.89). The CNR between bone marrow and bone was slightly higher in 1.5T images (4.87, 12.89 and 10.07) compared with 3T images (3.74, 10.83 and 10.15). These results show that the 3T images generated higher contrast between bone and muscle tissue than the 1.5T images. It is expected that this improvement in image contrast will significantly reduce the time required for the mainly manual segmentation of the MR images. Future work will focus on optimizing the 3T imaging protocol to reduce chemical shift and susceptibility artifacts.
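One common way to compute the two image-quality measures reported above is shown below; the study's exact formulae are in its reference [3], so treat these definitions as assumptions: SNR as the mean region-of-interest intensity over the noise standard deviation, and CNR as the absolute difference of two tissue means over the same noise estimate. All pixel values are made up.

```python
import numpy as np

def snr(roi, noise_sd):
    """Signal-to-noise ratio: mean ROI intensity over the noise standard
    deviation (one common definition; assumed, not the paper's exact formula)."""
    return float(np.mean(roi)) / noise_sd

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two tissues, e.g. muscle and cortical bone."""
    return abs(float(np.mean(roi_a)) - float(np.mean(roi_b))) / noise_sd

muscle = np.array([118.0, 121.0, 120.0, 119.0])  # made-up pixel intensities
cortex = np.array([40.0, 42.0, 41.0, 41.0])
noise_sd = 10.0                                  # SD of a background (air) region

print(snr(muscle, noise_sd))          # 11.95
print(cnr(muscle, cortex, noise_sd))  # 7.85
```

A higher CNR at the muscle-cortex interface, as the 3T results show, directly eases threshold-based or manual segmentation of the bone boundary.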