130 results for Spread trading
Abstract:
The eyelids play an important role in lubricating and protecting the surface of the eye. Each blink serves to spread fresh tears, remove debris and replenish the smooth optical surface of the eye. Yet little is known about how the eyelids contact the ocular surface and what pressure distribution exists between the eyelids and cornea. As the principal refractive component of the eye, the cornea is a major element of the eye’s optics. The optical properties of the cornea are known to be susceptible to the pressure exerted by the eyelids. Abnormal eyelids, due to disease, have altered pressure on the ocular surface due to changes in the shape, thickness or position of the eyelids. Normal eyelids also cause corneal distortions that are most often noticed when they are resting closer to the corneal centre (for example during reading). There have been many reports of monocular diplopia after reading due to corneal distortion, but prior to videokeratoscopes these localised changes could not be measured. This thesis measured the influence of eyelid pressure on the cornea after short-term near tasks, and techniques were developed to quantify eyelid pressure and its distribution. The profile of the wave-like eyelid-induced corneal changes and the refractive effects of these distortions were investigated. Corneal topography changes due to both the upper and lower eyelids were measured for four tasks involving two angles of vertical downward gaze (20° and 40°) and two near work tasks (reading and steady fixation). After examining the depth and shape of the corneal changes, conclusions were reached regarding the magnitude and distribution of upper and lower eyelid pressure for these task conditions. The degree of downward gaze appears to alter the upper eyelid pressure on the cornea, with deeper changes occurring after greater angles of downward gaze.
Although the lower eyelid was further from the corneal centre in large angles of downward gaze, its effect on the cornea was greater than that of the upper eyelid. Eyelid tilt, curvature, and position were found to be influential in the magnitude of eyelid-induced corneal changes. Refractively these corneal changes are clinically and optically significant with mean spherical and astigmatic changes of about 0.25 D after only 15 minutes of downward gaze (40° reading and steady fixation conditions). Due to the magnitude of these changes, eyelid pressure in downward gaze offers a possible explanation for some of the day-to-day variation observed in refraction. Considering the magnitude of these changes and previous work on their regression, it is recommended that sustained tasks performed in downward gaze should be avoided for at least 30 minutes before corneal and refractive assessment requiring high accuracy. Novel procedures were developed to use a thin (0.17 mm) tactile piezoresistive pressure sensor mounted on a rigid contact lens to measure eyelid pressure. A hydrostatic calibration system was constructed to convert raw digital output of the sensors to actual pressure units. Conditioning the sensor prior to use regulated the measurement response and sensor output was found to stabilise about 10 seconds after loading. The influences of various external factors on sensor output were studied. While the sensor output drifted slightly over several hours, it was not significant over the measurement time of 30 seconds used for eyelid pressure, as long as the lengths of the calibration and measurement recordings were matched. The error associated with calibrating at room temperature but measuring at ocular surface temperature led to a very small overestimation of pressure. To optimally position the sensor-contact lens combination under the eyelid margin, an in vivo measurement apparatus was constructed.
Using this system, eyelid pressure increases were observed when the upper eyelid was placed on the sensor and a significant increase was apparent when the eyelid pressure was increased by pulling the upper eyelid tighter against the eye. For a group of young adult subjects, upper eyelid pressure was measured using this piezoresistive sensor system. Three models of contact between the eyelid and ocular surface were used to calibrate the pressure readings. The first model assumed contact between the eyelid and pressure sensor over more than the pressure cell width of 1.14 mm. Using thin pressure sensitive carbon paper placed under the eyelid, a contact imprint was measured and this width used for the second model of contact. Lastly as Marx’s line has been implicated as the region of contact with the ocular surface, its width was measured and used as the region of contact for the third model. The mean eyelid pressures calculated using these three models for the group of young subjects were 3.8 ± 0.7 mmHg (whole cell), 8.0 ± 3.4 mmHg (imprint width) and 55 ± 26 mmHg (Marx’s line). The carbon imprints using Pressurex-micro confirmed previous suggestions that a band of the eyelid margin has primary contact with the ocular surface and provided the best estimate of the contact region and hence eyelid pressure. Although it is difficult to directly compare the results with previous eyelid pressure measurement attempts, the eyelid pressure calculated using this model was slightly higher than previous manometer measurements but showed good agreement with the eyelid force estimated using an eyelid tensiometer. The work described in this thesis has shown that the eyelids have a significant influence on corneal shape, even after short-term tasks (15 minutes). Instrumentation was developed using piezoresistive sensors to measure eyelid pressure. 
Measurements for the upper eyelid combined with estimates of the contact region between the cornea and the eyelid enabled quantification of the upper eyelid pressure for a group of young adult subjects. These techniques will allow further investigation of the interaction between the eyelids and the surface of the eye.
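The three contact models above differ only in the width over which the sensor's registered force is assumed to act, so the reported pressures can be related by a simple inverse-width rescaling. The sketch below illustrates this relationship; the imprint and Marx's line widths used (0.54 mm and 0.08 mm) are back-calculated for illustration from the reported means, not values taken from the thesis.

```python
# Hypothetical sketch: rescaling a whole-cell pressure reading to narrower
# assumed contact bands. Assumes the force registered over the sensor cell
# is fixed, so pressure scales inversely with the assumed contact width.

CELL_WIDTH_MM = 1.14  # pressure cell width reported in the thesis

def rescale_pressure(cell_pressure_mmhg, contact_width_mm):
    """Rescale a whole-cell pressure reading to a narrower contact band."""
    return cell_pressure_mmhg * CELL_WIDTH_MM / contact_width_mm

# Illustrative only: these contact widths are back-calculated so the three
# models reproduce roughly the reported means (3.8, 8.0, 55 mmHg).
whole_cell = rescale_pressure(3.8, CELL_WIDTH_MM)  # model 1: whole cell
imprint    = rescale_pressure(3.8, 0.54)           # model 2: imprint width
marx_line  = rescale_pressure(3.8, 0.08)           # model 3: Marx's line
```

The inverse-width scaling makes clear why the Marx's line model yields a pressure an order of magnitude higher than the whole-cell model: the same force is attributed to a much smaller contact area.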
Abstract:
Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver’s cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel frequency cepstral coefficient based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy in the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting—together with the acoustic analysis—that it is possible to monitor the level of driver distraction directly from their speech.
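The classification setup described above can be sketched in miniature. The study used an MFCC-based Gaussian mixture classifier; the stand-in below uses a single diagonal Gaussian per class (the simplest one-component GMM) on synthetic feature vectors, so the feature values and class separation are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Minimal stand-in for an MFCC/GMM interaction classifier: one diagonal
# Gaussian per class (a 1-component GMM). Features are synthetic; real use
# would substitute MFCC vectors extracted from the speech frames.

class DiagGaussian:
    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6  # floor to avoid division by zero
        return self

    def log_likelihood(self, X):
        # Sum of per-dimension Gaussian log-densities for each frame.
        return (-0.5 * (np.log(2 * np.pi * self.var)
                        + (X - self.mu) ** 2 / self.var)).sum(axis=1)

rng = np.random.default_rng(0)
codriver = rng.normal(0.0, 1.0, size=(500, 12))  # stand-in "co-driver" frames
sds      = rng.normal(1.5, 1.0, size=(500, 12))  # stand-in "SDS" frames

models = [DiagGaussian().fit(codriver), DiagGaussian().fit(sds)]

def classify(X):
    # Pick the class whose model assigns the higher likelihood.
    scores = np.stack([m.log_likelihood(X) for m in models])
    return scores.argmax(axis=0)

test = rng.normal(1.5, 1.0, size=(200, 12))      # unseen "SDS" frames
accuracy = (classify(test) == 1).mean()
```

A real GMM would use several mixture components per class; the maximum-likelihood decision rule shown here is the same.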
Abstract:
This review chapter provides an overview of English language literacy education in the contexts of cultural and economic globalisation. Drawing on case study examples from India and China, the authors outline three complementary models: the development paradigm, the hegemony paradigm and the new literacies paradigm. The analysis focuses on the effects of the spread of English on vernacular languages and the non-synchronous issues raised by digital production cultures. Noting the difficulties of education systems in contending with new literacies, it argues for the reframing of transnational relations, global material conditions and new communications technologies as the objects of critical literacy education.
Abstract:
This paper examines the role of staff training and support in the experience of staff with online teaching at Australian Catholic University. Australian Catholic University’s online education is differentiated from other Australian universities for two reasons: 1) It has 6 nationally based campuses spread across a geographical reach of 2,500 kilometres, 2) It has no internal Flexible Learning or Distance Education unit as such, and outsources the provision of web-based teaching and learning services to NextEd Pty Ltd, a commercial online education company. Both factors provide challenges and benefits to the effective training, support and coordination of online education in the university.
Abstract:
The forms social enterprises can take and the industries they operate in are so many and various that it has always been a challenge to define, find and count social enterprises. In 2009 Social Traders partnered with the Australian Centre for Philanthropy and Nonprofit Studies (ACPNS) at Queensland University of Technology to define social enterprise and, for the first time in Australia, to identify and map the social enterprise sector: its scope, its variety of forms, its reasons for trading, its financial dimensions, and the individuals and communities social enterprises aim to benefit.
Abstract:
International market access for fresh commodities is regulated by internationally accepted phytosanitary guidelines, the objectives of which are to reduce the biosecurity risk of plant pest and disease movement. Papua New Guinea (PNG) has identified banana as a potential export crop and to help meet international market access requirements, this thesis provides information for the development of a pest risk analysis (PRA) for PNG banana fruit. The PRA is a three-step process which first identifies the pests associated with a particular commodity or pathway, then assesses the risk associated with those pests, and finally identifies risk management options for those pests if required. As the first step of the PRA process, I collated a definitive list of the organisms associated with the banana plant in PNG using formal literature, structured interviews with local experts, grey literature and unpublished file material held in PNG field research stations. I identified 112 organisms (invertebrates, vertebrates, pathogens and weeds) associated with banana in PNG, but only 14 of these were reported as commonly requiring management. For these 14 I present detailed information summaries on their known biology and pest impact. A major finding of the review was that of the 14 identified key pests, research information exists for 13. The single exception for which information was found to be lacking was Bactrocera musae (Tryon), the banana fly. The lack of information for this widely reported ‘major pest on PNG bananas’ would hinder the development of a PNG banana fruit PRA. For this reason the remainder of the thesis focused on this organism, particularly with respect to generation of information required by the PRA process. Utilising an existing, but previously unanalysed fruit fly trapping database for PNG, I carried out a Geographic Information System analysis of the distribution and abundance of the banana fly in four major regions of PNG.
This information is required for a PRA to determine if banana fruit grown in different parts of the country are at different risks from the fly. Results showed that the fly was widespread in all cropping regions and that temperature and rainfall were not significantly correlated with banana fly abundance. Abundance of the fly was significantly correlated (albeit weakly) with host availability. The same analysis was done with four other PNG pest fruit flies and their responses to the environmental factors differed from those of banana fly and from each other. This implies that subsequent PRA analyses for other PNG fresh commodities will need to investigate the risk of each of these flies independently. To quantify the damage to banana fruit caused by banana fly in PNG, local surveys and one national survey of banana fruit infestation were carried out. Contrary to expectations, infestation was found to be very low, particularly in the widely grown commercial cultivar, Cavendish. Infestation of Cavendish fingers was only 0.41% in a structured, national survey of over 2 700 banana fingers. Follow-up laboratory studies showed that fingers of Cavendish, and another commercial variety Lady-finger, are very poor hosts for B. musae, with very low host selection rates by female flies and very poor immature survival. An analysis of a recent (within the last decade) incursion of B. musae into the Gazelle Peninsula of East New Britain Province, PNG, provided the final set of B. musae data. Surveys of the fly on the peninsula showed that establishment and spread of the fly in the novel environment was very rapid and thus the fly should be regarded as being of high biosecurity concern, at least in tropical areas. Supporting the earlier impact studies, however, banana fly has not become a significant banana fruit problem on the Gazelle, despite bananas being the primary starch staple of the region. The results of the research chapters are combined in the final Discussion in the form of a B.
musae-focused PRA for PNG banana fruit. Putting the thesis in a broader context, the Discussion also deals with the apparent discrepancy between high local abundance of banana fly and very low infestation rates. This discussion focuses on host utilisation patterns of specialist herbivores and suggests that local pest abundance, as determined by trapping or monitoring, need not be a good surrogate for crop damage, despite this linkage being implicit in a number of international phytosanitary protocols.
Abstract:
This paper introduces an energy-efficient Rate Adaptive MAC (RA-MAC) protocol for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communication is one of the major challenges to reliable data delivery in a WSN. RA-MAC achieves high link reliability in such situations by dynamically trading off radio bit rate for signal processing gain. This extra gain reduces the packet loss rate, which results in lower energy expenditure by reducing the number of retransmissions. RA-MAC selects the optimal data rate based on channel conditions with the aim of minimizing energy consumption. We have implemented RA-MAC in TinyOS on an off-the-shelf sensor platform (TinyNode), and evaluated its performance by comparing RA-MAC with a state-of-the-art WSN MAC protocol (SCP-MAC) through experiments.
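The abstract does not give RA-MAC's selection rule, but one plausible formulation of "selecting the optimal data rate to minimise energy" is to minimise the expected radio energy per successfully delivered packet, counting retransmissions. The sketch below assumes hypothetical per-rate loss probabilities and a hypothetical radio power; none of these numbers come from the paper.

```python
# Hypothetical rate-selection rule in the spirit of RA-MAC: pick the bit
# rate minimising expected radio energy per successfully delivered packet.
# Loss probabilities and radio power below are assumed, not from the paper.

TX_POWER_MW = 60.0  # assumed radio transmit power (mW)
PACKET_BITS = 1024

def energy_per_delivery(rate_kbps, loss_prob):
    """Expected mJ per delivered packet, retransmitting until success."""
    tx_time_s = PACKET_BITS / (rate_kbps * 1000.0)
    expected_attempts = 1.0 / (1.0 - loss_prob)  # geometric retry model
    return TX_POWER_MW * tx_time_s * expected_attempts

def select_rate(channel):
    """channel: {rate_kbps: loss_prob} estimated from current conditions."""
    return min(channel, key=lambda r: energy_per_delivery(r, channel[r]))

# On a poor link a lower rate can win despite its longer airtime, because
# it avoids costly retransmissions; on a good link the highest rate wins.
poor_link = {25: 0.05, 50: 0.60, 100: 0.90}
good_link = {25: 0.01, 50: 0.02, 100: 0.05}
```

This captures the trade-off the abstract describes: lowering the bit rate buys reliability (fewer retransmissions) at the price of longer transmission time.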
Abstract:
The high moisture content of mill mud (typically 75–80% for Australian factories) results in high transportation costs for the redistribution of mud onto cane farms. The high transportation cost relative to the nutrient value of the mill mud results in many milling companies subsidising the cost of this recycling to ensure a wide distribution across the cane supply area. An average mill would generate about 100 000 t of mud (at 75% moisture) in a crushing season. The development of mud processing facilities that will produce a low moisture mud that can be effectively incorporated into cane land with existing or modified spreading equipment will improve the cost efficiency of mud redistribution to farms; provide an economical fertiliser alternative to more farms in the supply area; and reduce the potential for adverse environmental impacts from farms. A research investigation assessing solid bowl decanter centrifuges to produce low moisture mud with low residual pol was undertaken and the results compared to the performance of existing rotary vacuum filters in factory trials. The decanters were operated on filter mud feed in parallel with the rotary vacuum filters to allow comparisons of performance. Samples of feed, mud product and filtrate were analysed to provide performance indicators. The decanter centrifuge could produce mud cakes with very low moistures and residual pol levels. Spreading trials in cane fields indicated that the dry cake could be spread easily by standard mud trucks and by trucks designed specifically to spread fertiliser.
Abstract:
Spectrum sensing is considered to be one of the most important tasks in cognitive radio. Many sensing detectors have been proposed in the literature, with the common assumption that the primary user is either fully present or completely absent within the window of observation. In reality, there are scenarios where the primary user signal only occupies a fraction of the observed window. This paper aims to analyse the effect of the primary user duty cycle on spectrum sensing performance through the analysis of a few common detectors. Simulations show that the probability of detection degrades severely with reduced duty cycle regardless of the detection method. Furthermore, we show that reducing the duty cycle degrades performance more than lowering the signal strength does.
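The reported trend, detection probability degrading as the primary user's duty cycle shrinks, can be reproduced with a toy energy detector: a signal occupying only a fraction of the window dilutes the average-energy test statistic. All parameters below (window length, SNR, threshold, trial count) are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

# Toy energy-detector simulation of the duty-cycle effect. The primary
# signal occupies only a fraction of the observation window, so the mean
# of the energy statistic under H1 is 1 + duty_cycle * SNR.

rng = np.random.default_rng(1)
N = 1000    # samples per observation window (assumed)
SNR = 1.0   # signal power / noise power, i.e. 0 dB (assumed)

def detect_prob(duty_cycle, threshold, trials=2000):
    """Empirical probability that the energy statistic exceeds threshold."""
    occupied = int(N * duty_cycle)
    hits = 0
    for _ in range(trials):
        noise = rng.normal(0.0, 1.0, N)
        signal = np.zeros(N)
        signal[:occupied] = rng.normal(0.0, np.sqrt(SNR), occupied)
        stat = np.mean((noise + signal) ** 2)  # average energy in window
        hits += stat > threshold
    return hits / trials

# Threshold slightly above the noise-only mean of 1.0 (illustrative).
thr = 1.10
full = detect_prob(1.0, thr)   # duty cycle 1.0
half = detect_prob(0.5, thr)   # duty cycle 0.5
low  = detect_prob(0.1, thr)   # duty cycle 0.1
```

With the same threshold, the low-duty-cycle case sits near the detection boundary while the full-occupancy case is detected almost surely, matching the degradation the paper reports.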
Abstract:
The genre of narratives has become the genre of choice in many classrooms since the introduction of NAPLAN into Australian schools. Yet, Knapp and Watkins (2005) argue that narratives are the least understood of all the genres. Despite widespread acceptance that narratives serve the social purpose of entertaining, they can also be more edgy, offering a powerful social or informational role. This paper considers the effects of exposing novices to less standard realms of social discourse and disciplinary knowledge vis-a-vis a more clinical treatment focused on ‘standard’ narratives. I argue that we should not shy away from the challenges of edgy narratives just because our students are novice readers. The same holds true for our work in communities on the edge, that is, where poverty, multiculturalism or multilingualism and systemic failure are the norm. I am part of an Australian Research Council (ARC) Linkage Grant (LP 0990289) working in such a community. As in many such situations, teachers in these communities are caught in the fray of establishing a dialogue between the culture of federally mandated performance-oriented reforms and the cultures and discourses of the lives and future needs of their students (see Exley & Singh, in press).
Abstract:
This thesis examines the advanced North American environmental mitigation schemes for their applicability to Queensland. Compensatory wetland mitigation banking, in particular, is concerned with in-perpetuity management and protection - the basic concerns of the Queensland public about its unique environment. The process has actively engaged the North American market and become a thriving industry that (for the most part) effectively designs, creates and builds (or enhances) environmental habitat. A methodology was designed to undertake a comprehensive review of the history, evolution and concepts of the North American wetland mitigation banking system - before and after the implementation of a significant new compensatory wetland mitigation banking regulation in 2008. The Delphi technique was then used to determine the principles and working components of wetland mitigation banking. Results were then applied to formulate a questionnaire to review Australian market-based instruments (including offsetting policies) against these North American principles. Following this, two case studies established guiding principles for implementation based on two components of the North American wetland mitigation banking program. The subsequent outcomes confirmed that environmental banking is a workable concept in North America and that it is worth applying in Queensland. The majority of offsetting policies in Australia have adopted some principles of the North American mitigation programs. Examination reveals, however, that they fail to provide adequate incentives for private landowners to participate because the essential trading mechanisms are not employed. Much can thus be learnt from the North American situation - where private enterprise has devised appropriate free market concepts. The consequent environmental banking process (as adapted from the North American programs) should be implemented in Queensland.
Attention can then focus on engaging the private sector, which manages the majority of naturally productive lands.
Abstract:
The increase of buyer-driven supply chains, outsourcing and other forms of non-traditional employment has resulted in challenges for labour market regulation. One business model which has created substantial regulatory challenges is supply chains. The supply chain model involves retailers purchasing products from brand corporations who then outsource the manufacturing of the work to traders who contract with factories or outworkers who actually manufacture the clothing and textiles. This business model results in time and cost pressures being pushed down the supply chain, which has resulted in sweatshops where workers systematically have their labour rights violated. Literally millions of workers work in dangerous workplaces where thousands are killed or permanently disabled every year. This thesis has analysed possible regulatory responses to provide workers with a right to safety and health in supply chains which provide products for Australian retailers. This thesis will use a human rights standard to determine whether Australia is discharging its human rights obligations in its approach to combating domestic and foreign labour abuses. It is beyond the scope of this thesis to analyse Occupational Health and Safety (OHS) laws in every jurisdiction. Accordingly, this thesis will focus upon Australian domestic laws and laws in one of Australia’s major trading partners, the People’s Republic of China (China). It is hypothesised that Australia is currently breaching its human rights obligations through failing to adequately regulate employees’ safety at work in Australian-based supply chains. To prove this hypothesis, this thesis will adopt a three-phase approach to analysing Australia’s regulatory responses. Phase 1 will identify the standard by which Australia’s regulatory approach to employees’ health and safety in supply chains can be judged.
This phase will focus on analysing how workers’ right to safety, as a human right, imposes a moral obligation on Australia to take reasonably practicable steps to regulate Australian-based supply chains. This will form a human rights standard against which Australia’s conduct can be judged. Phase 2 focuses upon the current regulatory environment. If existing regulatory vehicles adequately protect the health and safety of employees, then Australia will have discharged its obligations through simply maintaining the status quo. Australia currently regulates OHS through a combination of ‘hard law’ and ‘soft law’ regulatory vehicles. The first part of phase 2 analyses the effectiveness of traditional OHS laws in Australia and in China. The final part of phase 2 then analyses the effectiveness of the major soft law vehicle, ‘Corporate Social Responsibility’ (CSR). The fact that employees are working in unsafe working conditions does not mean Australia is breaching its human rights obligations. Australia is only required to take reasonably practicable steps to ensure human rights are realized. Phase 3 identifies four regulatory vehicles to determine whether they would assist Australia in discharging its human rights obligations. Phase 3 then analyses whether Australia could unilaterally introduce supply chain regulation to regulate domestic and extraterritorial supply chains. Phase 3 also analyses three public international law regulatory vehicles. This chapter considers the ability of the United Nations Global Compact, the ILO’s Better Factory Project and a bilateral agreement to improve the detection and enforcement of workers’ right to safety and health.
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) the impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given for linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Some of these advantages are having a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for the frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for the frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of impulsiveness of stable processes on generating some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
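The least-mean P-norm update can be illustrated outside the lattice structure. The sketch below applies the LMP stochastic gradient, which reduces to ordinary LMS at p = 2, to estimate an AR(1) coefficient. The transversal (non-lattice) form, the Gaussian driving noise, and all parameter values are simplifications for illustration; the thesis's algorithms are lattice-structured and aimed at alpha-stable noise.

```python
import numpy as np

# Sketch of the least-mean P-norm (LMP) stochastic gradient, shown for a
# simple transversal AR(1) estimator rather than the thesis's lattice form.
# Update rule: w += mu * |e|^(p-1) * sign(e) * x, which is LMS when p = 2.

rng = np.random.default_rng(2)

def lmp_ar1(x, p=1.2, mu=0.005):
    """Estimate the AR(1) coefficient of x with the LMP update."""
    w = 0.0
    for n in range(1, len(x)):
        e = x[n] - w * x[n - 1]                       # prediction error
        w += mu * np.abs(e) ** (p - 1) * np.sign(e) * x[n - 1]
    return w

# Synthetic AR(1) process x[n] = a*x[n-1] + v[n]. Gaussian noise is used
# here so the example is stable; the thesis targets alpha-stable noise,
# where p < alpha is what motivates the fractional lower order moments.
a = 0.7
v = rng.normal(0.0, 1.0, 20000)
x = np.zeros_like(v)
for n in range(1, len(v)):
    x[n] = a * x[n - 1] + v[n]

a_hat = lmp_ar1(x)
```

Using p < 2 tempers the influence of large errors on the update, which is why LMP-type algorithms remain usable when the driving noise has infinite variance.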
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all the 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.