Long-term exposure to gaseous air pollutants and cardio-respiratory mortality in Brisbane, Australia
Abstract:
Air pollution is ranked by the World Health Organisation as one of the top ten contributors to the global burden of disease and injury. Exposure to gaseous air pollutants, even at a low level, has been associated with cardiorespiratory diseases (Vedal, Brauer et al. 2003). Most recent epidemiological studies of air pollution have used time-series analyses to explore the relationship between daily mortality or morbidity and daily ambient air pollution concentrations based on the same day or previous days (Hajat, Armstrong et al. 2007). However, most of the previous studies have examined the association between air pollution and health outcomes using air pollution data from a single monitoring site, or average values from a few monitoring sites, to represent the whole population of the study area. In fact, for a metropolitan city, ambient air pollution levels may differ significantly among different areas. There is increasing concern that the relationships between air pollution and mortality may vary with geographical area (Chen, Mengersen et al. 2007). Additionally, some studies have indicated that socio-economic status can act as a confounder when investigating the relation between geographical location and health (Scoggins, Kjellstrom et al. 2004). This study examined the spatial variation in the relationship between long-term exposure to gaseous air pollutants (including nitrogen dioxide (NO2), ozone (O3) and sulphur dioxide (SO2)) and cardiorespiratory mortality in Brisbane, Australia, during the period 1996–2004.
Abstract:
Aberrations affect image quality of the eye away from the line of sight as well as along it. High amounts of lower order aberrations are found in the peripheral visual field, and higher order aberrations change away from the centre of the visual field. Peripheral resolution is poorer than that in central vision, but peripheral vision is important for movement and detection tasks (for example driving) which are adversely affected by poor peripheral image quality. Any physiological process or intervention that affects axial image quality will affect peripheral image quality as well. The aim of this study was to investigate the effects of accommodation, myopia, age, and the refractive interventions of orthokeratology, laser in situ keratomileusis and intraocular lens implantation on the peripheral aberrations of the eye. This is the first systematic investigation of peripheral aberrations in a variety of subject groups. Peripheral aberrations can be measured either by rotating a measuring instrument relative to the eye or rotating the eye relative to the instrument. I used the latter as it is much easier to do. To rule out effects of eye rotation on peripheral aberrations, I investigated the effects of eye rotation on axial and peripheral cycloplegic refraction using an open field autorefractor. For axial refraction, the subjects fixated on a target straight ahead, while their heads were rotated by ±30° with a compensatory eye rotation to view the target. For peripheral refraction, the subjects rotated their eyes to fixate on targets out to ±34° along the horizontal visual field, followed by measurements in which they rotated their heads such that the eyes stayed in the primary position relative to the head while fixating on the peripheral targets. Oblique viewing did not affect axial or peripheral refraction.
Therefore it is not critical, within the range of viewing angles studied, whether axial and peripheral refractions are measured with rotation of the eye relative to the instrument or rotation of the instrument relative to the eye. Peripheral aberrations were measured using a commercial Hartmann-Shack aberrometer. A number of hardware and software changes were made. The 1.4 mm range-limiting aperture was replaced by a larger aperture (2.5 mm) to ensure all the light from peripheral parts of the pupil reached the instrument detector even when aberrations were high, such as those that occur in peripheral vision. The power of the super luminescent diode source was increased to improve detection of spots passing through the peripheral pupil. A beam splitter was placed between the subjects and the aberrometer, through which they viewed an array of targets on a wall or projected on a screen in a 6-row × 7-column matrix of points covering a visual field of 42° × 32°. In peripheral vision, the pupil of the eye appears elliptical rather than circular; data were analysed off-line using custom software to determine peripheral aberrations. All analyses in the study were conducted for 5.0 mm pupils. The influence of accommodation on peripheral aberrations was investigated in young emmetropic subjects by presenting fixation targets at 25 cm and 3 m (4.0 D and 0.3 D accommodative demands, respectively). Increase in accommodation did not affect the patterns of any aberrations across the field, but there was an overall negative shift in spherical aberration of 0.10 ± 0.01 µm across the visual field. Subsequent studies were conducted with the targets at a 1.2 m distance. Young emmetropes, young myopes and older emmetropes exhibited similar patterns of astigmatism and coma across the visual field. However, the rate of change of coma across the field was higher in young myopes than young emmetropes and was highest in older emmetropes amongst the three groups.
Spherical aberration showed an overall decrease in myopes and increase in older emmetropes across the field, as compared to young emmetropes. Orthokeratology, spherical IOL implantation and LASIK altered peripheral higher order aberrations considerably, especially spherical aberration. Spherical IOL implantation resulted in an overall increase in spherical aberration across the field. Orthokeratology and LASIK reversed the direction of change in coma across the field. Orthokeratology corrected peripheral relative hypermetropia through correcting myopia in the central visual field. Theoretical ray tracing demonstrated that changes in aberrations due to orthokeratology and LASIK can be explained by the induced changes in radius of curvature and asphericity of the cornea. This investigation has shown that peripheral aberrations can be measured with reasonable accuracy with eye rotation relative to the instrument. Peripheral aberrations are affected by accommodation, myopia, age, orthokeratology, spherical intraocular lens implantation and laser in situ keratomileusis. These factors affect the magnitudes and patterns of most aberrations considerably (especially coma and spherical aberration) across the studied visual field. The changes in aberrations across the field may influence peripheral detection and motion perception. However, further research is required to investigate how the changes in aberrations influence peripheral detection and motion perception and consequently peripheral vision task performance.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were constructed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in-school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both and’ rather than ‘either or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.
Abstract:
Survey-based health research is in a boom phase, driven by increased health spending in OECD countries and growing interest in ageing. A general characteristic of survey-based health research is its diversity. Different studies are based on different health questions in different datasets; they use different statistical techniques; they differ in whether they approach health from an ordinal or cardinal perspective; and they differ in whether they measure short-term or long-term effects. The question in this paper is simple: do these differences matter for the findings? We investigate the effects of life-style choices (drinking, smoking, exercise) and income on six measures of health in the US Health and Retirement Study (HRS) between 1992 and 2002: (1) self-assessed general health status, (2) problems with undertaking daily tasks and chores, (3) mental health indicators, (4) BMI, (5) the presence of serious long-term health conditions, and (6) mortality. We compare ordinal models with cardinal models; we compare models with fixed effects to models without fixed effects; and we compare short-term effects to long-term effects. We find considerable variation in the impact of different determinants on our chosen health outcome measures; we find that it matters whether ordinality or cardinality is assumed; we find substantial differences between estimates that account for fixed effects and those that do not; and we find that short-run and long-run effects differ greatly. All this implies that health is an even more complicated notion than hitherto thought, defying generalizations from one measure to another or one methodology to another.
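The fixed-effects comparison described above can be illustrated with a toy panel. This is a hedged sketch in NumPy, not the HRS analysis itself: all variable names and magnitudes are invented. It shows why pooled and fixed-effects estimates of a life-style determinant diverge when an unobserved individual trait drives both the choice and the health outcome.

```python
import numpy as np

# Toy panel: an unobserved individual "health stock" (alpha) influences both a
# life-style choice (x) and a health outcome (y). All numbers are illustrative.
rng = np.random.default_rng(0)
n_people, n_waves = 500, 6
alpha = rng.normal(0, 1, n_people)                           # unobserved individual effect
x = alpha[:, None] + rng.normal(0, 1, (n_people, n_waves))   # choice, correlated with alpha
beta_true = 2.0
y = beta_true * x + alpha[:, None] + rng.normal(0, 1, (n_people, n_waves))

# Pooled estimator ignores alpha -> omitted-variable bias
xf, yf = x.ravel(), y.ravel()
beta_pooled = np.cov(xf, yf)[0, 1] / np.var(xf)

# Within (fixed-effects) estimator: demeaning within each person removes alpha
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()

print(round(beta_pooled, 2), round(beta_fe, 2))
```

With this construction the pooled slope is pushed above the true value of 2.0, while the within estimator recovers it, which is the mechanism behind the "substantial differences" the abstract reports between the two specifications.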
Abstract:
This article rebuts the still-common assumption that managers of capitalist entities have a duty, principally or even exclusively, to maximise the monetary return to investors on their investments. It argues that this view is based on a misleadingly simplistic conception of human values and motivation. Not only is acting solely to maximise long-term shareholder value difficult, it displays, at best, banal single-mindedness and, at worst, sociopathy. In fact, real investors and managers have rich constellations of values that should be taken account of in all their decisions, including their business decisions. Awareness of our values, and public expression of our commitment to exemplify them, make for healthier investment and, in the long term, a healthier corporate world. Individuals and funds investing on the basis of such values, in companies that express their own, display humanity rather than pathology. Preamble: I always enjoyed the discussions that Michael Whincop and I had about the interaction of ethics and economics. Each of us could see an important role for these disciplines, as well as our common discipline of law. We also shared an appreciation of the institutional context within which much of the drama of life is played out. In understanding the behaviour of individuals and the choices they make, it seemed axiomatic to each of us that ethics and economics have a lot to say. This was also true of the institutions in which they operate. Michael had a strong interest in 'the new institutional economics' and I had a strong interest in 'institutionalising ethics' right through the 1990s. This formed the basis of some fascinating and fruitful discussions.
Professor Charles Sampford is Director, Key Centre for Ethics, Law, Justice and Governance, Foundation Professor of Law at Griffith University and President, International Institute for Public Ethics. Dr Virginia Berry is a Research Fellow at the Key Centre for Ethics, Law, Justice and Governance, Griffith University. Oliver Williamson, one of the leading proponents of the 'new institutional economics', published a number of influential works - see Williamson (1975, 1995, 1996). Sampford (1991), pp 185-222. The primary focus of discussions on institutionalising ethics has been in public sector ethics: see, for example, Preston and Sampford (2002); Sampford (1994), pp 114-38. Some discussion has, however, moved beyond the public sector to include business - see Sampford (2004).
Abstract:
The New Zealand creative sector was responsible for almost 121,000 jobs at the time of the 2006 Census (6.3% of total employment). These are divided between:
• 35,751 creative specialists – persons employed doing creative work in creative industries
• 42,300 support workers – persons providing management and support services in creative industries
• 42,792 embedded creative workers – persons engaged in creative work in other types of enterprise
The most striking feature of this breakdown is that the largest group of creative workers is employed outside the creative industries, i.e. in other types of businesses. Even within the creative industries, fewer people are directly engaged in creative work than in providing management and support. Creative sector employees earned incomes of approximately $52,000 per annum at the time of the 2006 Census. This is relatively uniform across all three types of creative worker, and is significantly above the average for all employed persons (approximately $40,700). Creative employment and incomes grew strongly over both five-year periods between the 1996, 2001 and 2006 Censuses. However, when we compare creative and general trends, we see two distinct phases in the development of the creative sector:
• rapid structural growth over the five years to 2001 (especially led by developments in ICT), with creative employment and incomes increasing rapidly at a time when they were growing modestly across the whole economy;
• subsequent consolidation, with growth driven more by national economic expansion than structural change, and creative employment and incomes moving in parallel with strong economy-wide growth.
Other important trends revealed by the data are that:
• the strongest growth during the decade was in embedded creative workers, especially over the first five years.
The weakest growth was in creative specialists, with support workers in creative industries in the middle rank;
• by far the strongest growth in creative industries’ employment was in Software & Digital Content, which trebled in size over the decade.
Comparing New Zealand with the United Kingdom and Australia, the two southern hemisphere nations have significantly lower proportions of total employment in the creative sector (both in creative industries and embedded employment). New Zealand’s and Australia’s creative shares in 2001 were similar (5.4% each), but over the following five years our share expanded (to 5.7%) whereas Australia’s fell slightly (to 5.2%) – in both cases through changes in creative industries’ employment. The creative industries generated $10.5 billion in total gross output in the March 2006 year. Resulting from this was value added totalling $5.1b, representing 3.3% of New Zealand’s total GDP. Overall, value added in the creative industries represents 49% of industry gross output, higher than the 45% average across the whole economy. This reflects the relatively high labour intensity and high earnings of the creative industries. Industries with an above-average ratio of value added to gross output are usually labour-intensive, especially when wages and salaries are above average. This is true for Software & Digital Content and Architecture, Design & Visual Arts, with ratios of 60.4% and 55.2% respectively. However, there is significant variation in this ratio between different parts of the creative industries, with some parts (e.g. Software & Digital Content and Architecture, Design & Visual Arts) generating even higher value added relative to output, and others (e.g. TV & Radio, Publishing and Music & Performing Arts) less, because of high capital intensity and import content.
When we take into account the impact of the creative industries’ demand for goods and services from their suppliers, and consumption spending from incomes earned, we estimate that there is an addition to economic activity of:
• $30.9 billion in gross output, $41.4b in total
• $15.1b in value added, $20.3b in total
• 158,100 people employed, 234,600 in total
The total economic impact of the creative industries is approximately four times their direct output and value added, and three times their direct employment. Their effect on output and value added is roughly in line with the average over all industries, although the effect on employment is significantly lower. This is because of the relatively high labour intensity (and high earnings) of the creative industries, which generate below-average demand from suppliers but normal levels of demand through expenditure from incomes. Drawing on these numbers and conclusions, we suggest some (slightly speculative) directions for future research. The goal is to better understand the contribution the creative sector makes to productivity growth; in particular, the distinctive contributions from creative firms and embedded creative workers. The ideas for future research can be organised into several categories:
• Understanding the categories of the creative sector – who is doing the business? In other words, examine via more fine-grained research (at a firm level perhaps) just what the creative contribution is from the different aspects of the creative sector industries. It may be possible to categorise these in terms of more or less striking innovations.
• Investigate the relationship between the characteristics and the performance of the various creative industries/sectors.
• Look more closely at innovation at an industry level, e.g.
using an index of relative growth of exports, and see if this can be related to intensity of use of creative inputs.
• Undertake case studies of the creative sector.
• Undertake case studies of the embedded contribution to growth in the firms and industries that employ them, by examining several high-performing non-creative industries (in the same way as proposed for the creative sector).
• Look at the aggregates – drawing on the broad picture of the numbers of creative workers embedded within the different industries, consider the extent to which these might explain aspects of the industries’ varied performance in terms of exports, growth and so on.
• This might be extended to examine issues like the type of creative workers that are most effective when embedded, or to test the hypothesis that each industry has its own particular requirements for embedded creative workers that overwhelm any generic contributions from, say, design or IT.
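The multiplier claims in this report can be checked arithmetically from the figures quoted in the text. This is a back-of-envelope sketch only: the direct employment figure is inferred as total minus flow-on (234,600 − 158,100) rather than quoted directly.

```python
# Output and value added in NZ$ billions, employment in thousands, all from the
# figures quoted in the report; direct employment is back-calculated.
direct = {"gross_output": 10.5, "value_added": 5.1, "employment": 234.6 - 158.1}
flow_on = {"gross_output": 30.9, "value_added": 15.1, "employment": 158.1}
total = {k: direct[k] + flow_on[k] for k in direct}
multiplier = {k: round(total[k] / direct[k], 2) for k in direct}
print(multiplier)
```

The output and value-added multipliers come out close to 4, and the employment multiplier close to 3, consistent with the "approximately four times" and "three times" statements above.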
Abstract:
This paper shows how power quality can be improved in a microgrid that is supplying a nonlinear and unbalanced load. The microgrid contains a hybrid combination of inertial and converter-interfaced distributed generation units, where a decentralized power sharing algorithm is used to control its power management. One of the distributed generators in the microgrid is used as a power quality compensator for the unbalanced and harmonic load. The current reference generation for power quality improvement takes into account the active and reactive power to be supplied by the micro source which is connected to the compensator. Depending on the power requirement of the nonlinear load, the proposed control scheme can change modes of operation without any external communication interfaces. The compensator can operate in two modes depending on the entire power demand of the unbalanced nonlinear load. The proposed control scheme can even compensate system unbalance caused by the single-phase micro sources and load changes. The efficacy of the proposed power quality improvement method in such a microgrid is validated through extensive simulation studies using PSCAD/EMTDC software, with detailed dynamic models of the micro sources and power electronic converters.
Abstract:
An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes, such as amplitude, frequency and phase angle. This technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS and other power electronics applications. Furthermore, this algorithm is particularly suitable for the integration of distributed generation sources to power grids when fast and accurate detection of small variations of signal attributes is needed. Practical considerations such as the effect of noise, higher order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even when the signal is highly distorted due to the presence of non-linear loads and noise.
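As a hedged illustration of this kind of Kalman-filter attribute estimator (not the paper's exact formulation; the frequency, noise levels and sample counts below are invented), a linear Kalman filter can track the in-phase/quadrature components of a power-frequency signal and recover its amplitude and phase:

```python
import numpy as np

# State x = [a, b] models the signal s(t) = a*cos(wt) + b*sin(wt) at a known
# frequency; amplitude and phase follow from the converged state estimate.
f0, fs = 50.0, 2000.0
w = 2 * np.pi * f0
t = np.arange(400) / fs
true_amp, true_phase = 1.5, 0.4
z = true_amp * np.cos(w * t + true_phase) \
    + 0.05 * np.random.default_rng(1).normal(size=t.size)   # noisy measurements

x = np.zeros(2)          # state estimate [a, b]
P = np.eye(2) * 10.0     # state covariance (large: no prior knowledge)
Q = np.eye(2) * 1e-6     # process noise, allows slow drift of the attributes
R = 0.05 ** 2            # measurement noise variance

for k, tk in enumerate(t):
    H = np.array([np.cos(w * tk), np.sin(w * tk)])  # measurement row vector
    P = P + Q                                       # predict (state model is constant)
    S = H @ P @ H + R                               # innovation variance (scalar)
    K = P @ H / S                                   # Kalman gain
    x = x + K * (z[k] - H @ x)                      # measurement update
    P = P - np.outer(K, H @ P)

amp = np.hypot(x[0], x[1])
phase = np.arctan2(-x[1], x[0])  # a*cos(wt) + b*sin(wt) = A*cos(wt + phi), phi = atan2(-b, a)
print(round(amp, 2), round(phase, 2))
```

This sketch assumes a known fundamental frequency; the paper's contribution also covers frequency estimation and robustness to harmonics, which are not reproduced here.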
Abstract:
This paper presents an analysis of the phasor measurement method for tracking the fundamental power frequency, to show whether it has the performance necessary to cope with the requirements of power system protection and control. In this regard, several computer simulations presenting the conditions of a typical power system signal, especially those highly distorted by harmonics, noise and offset, are provided to evaluate the response of the Phasor Measurement (PM) technique. A new method that shortens the estimation delay is also proposed for the PM technique; it is applicable to signals free of even-order harmonics.
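The phasor measurement technique analysed here is conventionally implemented as a one-cycle DFT. The sketch below uses illustrative numbers (not the paper's test signals) to show why a full-cycle window rejects integer harmonics; the half-cycle variant that shortens the delay, as the abstract notes, is only valid when even-order harmonics are absent.

```python
import numpy as np

f0, fs = 50.0, 1600.0
N = int(fs / f0)          # samples per fundamental cycle (32 here)
n = np.arange(N)
amp, phase = 2.0, 0.7
# Fundamental plus a 3rd-harmonic distortion term (values illustrative)
x = amp * np.cos(2 * np.pi * f0 * n / fs + phase) \
    + 0.3 * np.cos(2 * np.pi * 3 * f0 * n / fs)

# Full-cycle DFT correlation at the fundamental: integer harmonics sum to zero
# over one complete cycle, so the phasor of the fundamental is recovered exactly.
X = (2.0 / N) * np.sum(x * np.exp(-1j * 2 * np.pi * n / N))
print(round(abs(X), 2), round(np.angle(X), 2))
```

Halving the window to N/2 samples halves the response delay but breaks the cancellation for even-order harmonics, which is the trade-off behind the proposed method's restriction.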
Abstract:
The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and cornea. As the principal refractive component of the eye, the cornea is a major element of the eye’s optics. The optical properties of the cornea are known to be susceptible to the pressure exerted by the eyelids. Abnormal eyelids, due to disease, exert altered pressure on the ocular surface due to changes in the shape, thickness or position of the eyelids. Normal eyelids also cause corneal distortions that are most often noticed when they are resting closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis has measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution. The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated. Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring after greater angles of downward gaze.
Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature, and position were found to be influential in the magnitude of eyelid-induced corneal changes. Refractively these corneal changes are clinically and optically significant with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Due to the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze should be avoided for at least 30 minutes before corneal and refractive assessment requiring high accuracy. Novel procedures were developed to use a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens to measure eyelid pressure. A hydrostatic calibration system was constructed to convert raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, it was not significant over the measurement time of 30 seconds used for eyelid pressure, as long as the length of the calibration and measurement recordings were matched. The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed. 
Using this system, eyelid pressure increases were observed when the upper eyelid was placed on the sensor and a significant increase was apparent when the eyelid pressure was increased by pulling the upper eyelid tighter against the eye. For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and pressure sensor over more than the pressure cell width of 1.14 mm. Using thin pressure sensitive carbon paper placed under the eyelid, a contact imprint was measured and this width used for the second model of contact. Lastly as Marx’s line has been implicated as the region of contact with the ocular surface, its width was measured and used as the region of contact for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width) and 55 ± 26 mmHg (Marx’s line). The carbon imprints using Pressurex-micro confirmed previous suggestions that a band of the eyelid margin has primary contact with the ocular surface and provided the best estimate of the contact region and hence eyelid pressure. Although it is difficult to directly compare the results with previous eyelid pressure measurement attempts, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer. The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure. 
Measurements for the upper eyelid combined with estimates of the contact region between the cornea and the eyelid enabled quantification of the upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
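The three contact models described above differ only in the assumed width of the eyelid-ocular contact band, so the implied pressure scales inversely with that width (the same sensed force spread over a narrower band gives a higher pressure). A minimal sketch of this rescaling; the imprint and Marx's line widths used here are illustrative values back-derived from the reported means, not figures from the thesis:

```python
# Hypothetical sketch: eyelid pressure estimates under different assumed
# contact widths. Only the 1.14 mm cell width and the mean whole-cell
# pressure of 3.8 mmHg come from the abstract; the narrower widths below
# are illustrative assumptions chosen to match the reported means.

CELL_WIDTH_MM = 1.14  # full pressure-cell width (given in the abstract)

def rescale_pressure(whole_cell_pressure_mmhg, assumed_contact_width_mm):
    """If the same force acts over a band narrower than the full cell,
    the implied pressure rises in inverse proportion to the band width."""
    return whole_cell_pressure_mmhg * CELL_WIDTH_MM / assumed_contact_width_mm

print(rescale_pressure(3.8, 1.14))              # whole-cell model: 3.8 mmHg
print(round(rescale_pressure(3.8, 0.54), 1))    # assumed imprint width
print(round(rescale_pressure(3.8, 0.079), 0))   # assumed Marx's line width
```

With these assumed widths the rescaling reproduces the order-of-magnitude jump between the three reported means (roughly 4, 8 and 55 mmHg), which is why the choice of contact model dominates the final pressure estimate.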
Resumo:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4 Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*), based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution).
Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which would result in a suboptimal path) by ensuring that the cost of traversing a track in one step equals that of traversing the same track in multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses it. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP. This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres.
The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
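The core idea of combining multiple decision objectives in an A*-style search with an additive (weighted-sum) cost can be sketched minimally. The grid world, the two objectives (path length and a risk penalty), and the weight vector below are illustrative assumptions only; the actual MSA* operates on a 4D multi-resolution lattice with a variable successor operator, which this sketch does not reproduce:

```python
import heapq

def astar(start, goal, neighbours, costs, weights, heuristic):
    """A* where costs(a, b) returns a tuple of per-objective costs for the
    move a -> b, combined into a scalar with a fixed weight vector."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt in neighbours(node):
            step = sum(w * c for w, c in zip(weights, costs(node, nxt)))
            g2 = g + step
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                f2 = g2 + heuristic(nxt, goal)
                heapq.heappush(frontier, (f2, g2, nxt, path + [nxt]))
    return None

# 4-connected 3x3 grid with two objectives: distance and a risk penalty.
risk = {(1, 1): 5.0}  # hypothetical high-risk cell to be avoided

def neighbours(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= 2 and 0 <= y + dy <= 2]

def costs(a, b):
    return (1.0, risk.get(b, 0.0))  # (distance cost, risk cost)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])  # admissible here

cost, path = astar((0, 0), (2, 2), neighbours, costs, (1.0, 1.0), manhattan)
# The optimal path routes around the risky cell at (1, 1).
```

Because both objectives are combined additively into a single step cost, the additive consistency property mentioned above corresponds here to each step cost depending only on the track segment traversed, not on how many steps are used to traverse it.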
Resumo:
Purpose: Students with low vision may be disadvantaged when compared with their normally sighted peers, as they frequently work at very short working distances and need to use low vision devices. The aim of this study was to examine the sustained reading rates of students with low vision and compare them with those of their peers with normal vision. The effects of visual acuity, acuity reserve and age on reading rate were also examined. Method: Fifty-six students (10 to 16 years of age), 26 with low vision and 30 with normal vision, were required to read text continuously for 30 minutes. Their position in the text was recorded at two-minute intervals. Distance and near visual acuity, working distance, cause of low vision, reading rates and reading habits were recorded. Results: A total of 80.7 per cent of the students with low vision maintained a constant reading rate during the 30 minutes of reading, although they read at approximately half the rate (104 wpm) of their normally sighted peers (195 wpm). Only four of the low vision subjects could not complete the reading task. Reading rates increased significantly with acuity reserve and with distance and near visual acuity, but there was no significant relationship between age and sustained reading rate. Conclusions: The majority of students with low vision were able to maintain reading rates appropriate for coping in integrated educational settings. Surprisingly, relatively few subjects (16 per cent) used their prescribed low vision devices, even though the average accommodative demand was 9 D, and they generally revealed a greater dislike of reading compared with students with normal vision.
Resumo:
Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
Resumo:
The law recognises the right of a competent adult to refuse medical treatment even if this will lead to death. Guardianship and other legislation also facilitates the making of decisions to withhold or withdraw life-sustaining treatment in certain circumstances. Despite this apparent endorsement that such decisions can be lawful, doubts have been raised in Queensland about whether decisions to withhold or withdraw life-sustaining treatment would contravene the criminal law, and particularly the duty imposed by the Criminal Code (Qld) to provide the “necessaries of life”. This article considers this tension in the law and examines various arguments that might allow for such decisions to be made lawfully. It ultimately concludes, however, that criminal responsibility may still arise and so reform is needed.
Resumo:
The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
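A volatility model combining level and GARCH effects is commonly specified by scaling a GARCH(1,1) conditional variance by the lagged rate raised to the power 2γ (in the spirit of Brenner, Harjes and Kroner). A hypothetical simulation sketch with illustrative parameter values, omitting the paper's regime-switching component for brevity:

```python
import random

# Hypothetical sketch of a level-GARCH volatility process for a short rate.
# All parameter values are illustrative assumptions, not estimates from the
# paper; the regime-switching layer described above is not modelled here.

def simulate_level_garch(n, r0=0.05, omega=1e-6, alpha=0.1, beta=0.85,
                         gamma=0.5, kappa=0.1, theta=0.05, seed=42):
    random.seed(seed)
    r = r0
    h = omega / (1 - alpha - beta)  # start at unconditional GARCH variance
    eps = 0.0
    rates, variances = [], []
    for _ in range(n):
        # GARCH(1,1) recursion, then the level effect scales the variance
        # by the lagged rate to the power 2*gamma.
        h = omega + alpha * eps ** 2 + beta * h
        var_t = h * r ** (2 * gamma)
        eps = random.gauss(0.0, var_t ** 0.5)
        # Mean-reverting drift for the rate plus the level-GARCH shock,
        # floored to keep the rate positive.
        r = max(r + kappa * (theta - r) + eps, 1e-6)
        rates.append(r)
        variances.append(var_t)
    return rates, variances

rates, variances = simulate_level_garch(500)
```

In this specification the level effect makes volatility rise with the rate itself, while the α and β terms carry the GARCH persistence; the paper's finding that GARCH effects remain significant after including a level effect corresponds to α and β remaining non-zero once γ is estimated.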