106 results for recovered stutterers
at Queensland University of Technology - ePrints Archive
Abstract:
The decision in Hook v Boreham & QBE Insurance (Australia) Limited [2006] QDC 304 considered whether the court should go further than ordering that costs be assessed on the indemnity basis and also specify the basis by which those indemnity costs should be determined. The decision makes it clear that under r704(3) of the Uniform Civil Procedure Rules, questions of that nature are ordinarily reserved to the discretion of the Registrar.
Abstract:
Australian mosquitoes from which Japanese encephalitis virus (JEV) has been recovered (Culex annulirostris, Culex gelidus, and Aedes vigilax) were assessed for their ability to be infected with the ChimeriVax-JE vaccine, with yellow fever vaccine virus 17D (YF 17D), from which the backbone of the ChimeriVax-JE vaccine is derived, and with JEV-Nakayama. None of the mosquitoes became infected after being fed orally with 6.1 log10 plaque-forming units (PFU)/mL of ChimeriVax-JE vaccine, which is greater than the peak viremia in vaccinees (mean peak viremia = 4.8 PFU/mL, range = 0-30 PFU/mL; mean duration = 0.9 days, range = 0-11 days). Some members of all three species of mosquito became infected when fed on JEV-Nakayama, but only Ae. vigilax was infected when fed on YF 17D. The results suggest that none of these three species of mosquito is likely to set up secondary cycles of transmission of ChimeriVax-JE in Australia after feeding on a viremic vaccinee.
Abstract:
Employer non-compliance with workers’ entitlements is an area seldom explored in Australian industrial relations, generally considered uncommon or the province of ‘rogue’ employers. This paper provides a picture of the categories of entitlements against which complaints of evasion were made in the federal industrial relations jurisdiction in Australia between 1986 and 1995, and of the characteristics of complainants. The “top 30” awards, ranked by the extent of underpayment recovered by the federal enforcement agency (1987-95), are also explored to support arguments that intense competition, reduced union density, precarious employment, youth and being female are strongly associated with employer evasion. The increasing prevalence of these factors in the labour market suggests that employer compliance should be more carefully explored in the Australian context.
Abstract:
The analysis and value of digital evidence in an investigation have been the subject of discourse in the digital forensic community for several years. While many works have considered different approaches to modelling digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. In this paper, we present the forensic integration architecture (FIA), which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. The FIA architecture identifies evidence information from multiple sources, enabling an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology-independent approach. FIA is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value the architecture brings to the field.
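As a rough illustration of the source-abstraction idea described above, evidence items from heterogeneous sources might be normalised into a common record and merged into one timeline. The sketch below is hypothetical; the EvidenceItem record and integrate function are illustrative assumptions, not taken from the FIA paper:

from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List

@dataclass
class EvidenceItem:
    source: str          # e.g. "GPS logger", "phone", "filesystem image"
    timestamp: datetime  # assumed to be normalised to UTC by each source adapter
    description: str

def integrate(sources: Iterable[Iterable[EvidenceItem]]) -> List[EvidenceItem]:
    """Merge evidence from every source into one chronologically ordered timeline."""
    timeline = [item for source in sources for item in source]
    return sorted(timeline, key=lambda item: item.timestamp)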
Abstract:
Ophthalmic wavefront sensors typically measure wavefront slope, from which wavefront phase is reconstructed. We show that ophthalmic prescriptions (in power-vector format) can be obtained directly from slope measurements without wavefront reconstruction. This is achieved by fitting the measurement data with a new set of orthonormal basis functions called Zernike radial slope polynomials. Coefficients of this expansion can be used to specify the ophthalmic power vector using explicit formulas derived by a variety of methods. Zernike coefficients for wavefront error can be recovered from the coefficients of radial slope polynomials, thereby offering an alternative way to perform wavefront reconstruction.
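The core step described above, fitting slope measurements with a set of basis functions and reading ophthalmic quantities from the resulting coefficients, amounts to a linear least-squares problem. The sketch below shows only that generic step; the actual Zernike radial slope polynomials and the paper's explicit power-vector formulas are not reproduced here:

import numpy as np

def fit_slope_coefficients(design_matrix: np.ndarray, slopes: np.ndarray) -> np.ndarray:
    """Least-squares fit of measured slopes to a chosen basis.
    design_matrix[i, j] is the j-th basis function evaluated at sample i;
    slopes[i] is the measured slope at that sample. Returns the coefficients,
    from which quantities such as the power vector would then be computed."""
    coeffs, *_ = np.linalg.lstsq(design_matrix, slopes, rcond=None)
    return coeffs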
Abstract:
Objective: To investigate the acute effects of isolated eccentric and concentric calf muscle exercise on Achilles tendon sagittal thickness. ---------- Design: Within-subject, counterbalanced, mixed design. ---------- Setting: Institutional. ---------- Participants: 11 healthy, recreationally active male adults. ---------- Interventions: Participants performed an exercise protocol, which involved isolated eccentric loading of the Achilles tendon of a single limb and isolated concentric loading of the contralateral, both with the addition of 20% bodyweight. ---------- Main outcome measurements: Sagittal sonograms were acquired prior to, immediately following and 3, 6, 12 and 24 h after exercise. Tendon thickness was measured 2 cm proximal to the superior aspect of the calcaneus. ---------- Results: Both loading conditions resulted in an immediate decrease in normalised Achilles tendon thickness. Eccentric loading induced a significantly greater decrease than concentric loading despite a similar impulse (−0.21 vs −0.05, p<0.05). Post-exercise, eccentrically loaded tendons recovered exponentially, with a recovery time constant of 2.5 h. The same exponential function did not adequately model changes in tendon thickness resulting from concentric loading. Even so, recovery pathways subsequent to the 3 h time point were comparable. Regardless of the exercise protocol, full tendon thickness recovery was not observed until 24 h. ---------- Conclusions: Eccentric loading invokes a greater reduction in Achilles tendon thickness immediately after exercise but appears to recover fully in a similar time frame to concentric loading.
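The exponential recovery reported for the eccentrically loaded tendons corresponds to a single-time-constant model of the general form below; the parametrisation is an assumption for illustration, as only the 2.5 h time constant is stated in the abstract:

d(t) = d_rest - Δd · exp(-t / τ), with τ ≈ 2.5 h,

where d(t) is the normalised tendon thickness at time t after exercise, d_rest is the pre-exercise thickness and Δd is the immediate post-exercise decrease.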
Abstract:
The challenge of persistent navigation and mapping is to develop an autonomous robot system that can simultaneously localize, map and navigate over the lifetime of the robot with little or no human intervention. Most solutions to the simultaneous localization and mapping (SLAM) problem aim to produce highly accurate maps of areas that are assumed to be static. In contrast, solutions for persistent navigation and mapping must produce reliable goal-directed navigation outcomes in an environment that is assumed to be in constant flux. We investigate the persistent navigation and mapping problem in the context of an autonomous robot that performs mock deliveries in a working office environment over a two-week period. The solution was based on the biologically inspired visual SLAM system, RatSLAM. RatSLAM performed SLAM continuously while interacting with global and local navigation systems, and a task selection module that selected between exploration, delivery, and recharging modes. The robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), traveled a total distance of more than 40 km over 37 hours of active operation, and recharged autonomously a total of 23 times.
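The task selection module mentioned above can be pictured with a minimal, hypothetical policy; the threshold and mode names below are illustrative assumptions, not details of the RatSLAM delivery system:

def select_task(battery_level: float, pending_deliveries: list) -> str:
    """Choose between the recharging, delivery and exploration modes."""
    LOW_BATTERY = 0.2            # assumed fraction of full charge
    if battery_level < LOW_BATTERY:
        return "recharge"
    if pending_deliveries:
        return "deliver"
    return "explore"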
Abstract:
Poet's statement: My father died of pancreatic cancer a few years ago, and since then other family members and friends have developed cancer. Some have recovered, perhaps temporarily, while for others the prospect is one of inevitable decline, raising questions about when the point is reached where death is preferable to life. This poem expresses the ambiguity of visceral urges which could be towards either continued life or a relieving death.
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences that exist in energy demand and in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments is conducted and the results are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion model is applied to the Queensland electricity grid in Australia.
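The decomposition described above alternates between a master problem that proposes trial configurations and an operating subproblem that evaluates them and returns dual information, until the lower and upper bounds meet. The loop below is a schematic sketch of that idea only; solve_master and solve_operating_subproblem are placeholders standing in for the thesis's mixed-integer master and smooth nonlinear operating models:

def capacity_expansion(solve_master, solve_operating_subproblem, tol=1e-3, max_iters=50):
    """Schematic resource-decomposition loop with placeholder solvers."""
    cuts = []                                   # dual information returned to the master
    lower, upper = float("-inf"), float("inf")
    best_config = None
    for _ in range(max_iters):
        config, lower = solve_master(cuts)      # trial configuration and lower bound
        cost, duals = solve_operating_subproblem(config)  # optimal operation and duals
        if cost < upper:
            upper, best_config = cost, config
        cuts.append((config, cost, duals))      # master uses the duals for the next trial
        if upper - lower <= tol:                # bounds have converged
            break
    return best_config, lower, upper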
Abstract:
A membrane filtration plant using suitable micro or ultra-filtration membranes has the potential to significantly increase pan stage capacity and improve sugar quality. Previous investigations by SRI and others have shown that membranes will remove polysaccharides, turbidity and colloidal impurities and result in lower viscosity syrups and molasses. However, the conclusion from those investigations was that membrane filtration was not economically viable. A comprehensive assessment of current generation membrane technology was undertaken by SRI. With the aid of two pilot plants provided by Applexion and Koch Membrane Systems, extensive trials were conducted at an Australian factory using clarified juice at 80–98°C as feed to each pilot plant. Conditions were varied during the trials to examine the effect of a range of operating parameters on the filtering characteristics of each of the membranes. These parameters included feed temperature and pressure, flow velocity, soluble solids and impurity concentrations. The data were then combined to develop models to predict the filtration rate (or flux) that could be expected for nominated operating conditions. The models demonstrated very good agreement with the data collected during the trials. The trials also identified those membranes that provided the highest flux levels per unit area of membrane surface for a nominated set of conditions. Cleaning procedures were developed that ensured the water flux level was recovered following a clean-in-place process. Bulk samples of clarified juice and membrane filtered juice from each pilot were evaporated to syrup to quantify the gain in pan stage productivity that results from the removal of high molecular weight impurities by membrane filtration. The results are in general agreement with those published by other research groups.
Abstract:
Employer non-compliance with workers’ entitlements has been largely ignored in Australian industrial relations. The legal and regulatory literature however, identifies arguments relating to employer propensity to evade regulatory requirements, as well as highlighting environmental factors that may influence such behaviour. This article explores these issues in the Australian federal industrial relations jurisdiction, as well as providing a picture of employer evasion of minimum labour standards between 1986 and 1995: who is exploited and in respect of what entitlements. Industry contexts and common characteristics of non-compliance are outlined by exploration of 30 awards ranked by the extent of underpayments recovered by the federal inspectorate during the period. Employer evasion of workers’ entitlements is arguably a calculated business decision, prompted or facilitated by intense competition, precarious employment (particularly female and youth), non-unionized workplaces and under-resourced enforcement agencies.
Abstract:
Wireless Multimedia Sensor Networks (WMSNs) have become increasingly popular in recent years, driven in part by the increasing commoditization of small, low-cost CMOS sensors. As such, the challenge of automatically calibrating these camera nodes has become an important research problem, especially when a large number of such devices is deployed. This paper presents a method for automatically calibrating a wireless camera node with the ability to rotate around one axis. The method involves capturing images as the camera is rotated and computing the homographies between the images. The camera parameters, including focal length, principal point and the angle and axis of rotation, can then be recovered from two or more homographies. The homography computation algorithm is designed to deal with the limited resources of the wireless sensor and to minimize energy consumption. In this paper, a modified RANdom SAmple Consensus (RANSAC) algorithm is proposed to increase the efficiency and reliability of the calibration procedure.
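As background to the method above: for a camera rotating about its centre, the inter-image homography has the form H = K R K^-1, which is why the focal length, principal point and rotation can be recovered from two or more homographies. The sketch below shows only a standard 4-point DLT homography inside a plain RANSAC loop; the paper's modified, resource-aware RANSAC is not reproduced here:

import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4 or more point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def ransac_homography(src, dst, iters=500, threshold=2.0, seed=None):
    """Robust homography between matched Nx2 point arrays src and dst."""
    rng = np.random.default_rng(seed)
    best_h, best_inliers = None, 0
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = dlt_homography(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ h.T          # map all source points through h
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < threshold))
        if inliers > best_inliers:
            best_h, best_inliers = h, inliers
    return best_h, best_inliers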
Abstract:
For a mobile robot to operate autonomously in real-world environments, it must have an effective control system and a navigation system capable of providing robust localization, path planning and path execution. In this paper we describe work investigating synergies between mapping and control systems. We have integrated the development of a control system for navigating mobile robots with a robot SLAM system. The control system is hybrid in nature and tightly coupled with the SLAM system; it uses a combination of high- and low-level deliberative and reactive control processes to perform obstacle avoidance, exploration, global navigation and recharging, and draws upon the map learning and localization capabilities of the SLAM system. The effectiveness of this hybrid, multi-level approach was evaluated in the context of a delivery robot scenario. Over a period of two weeks the robot performed 1,143 delivery tasks to 11 different locations with only one delivery failure (from which it recovered), travelled a total distance of more than 40 km, and recharged autonomously a total of 23 times. In this paper we describe the combined control and SLAM system and discuss insights gained from its successful application in a real-world context.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing the pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is to use thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
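In very reduced form, the finite-difference beam propagation method referred to above advances the optical field plane by plane through the medium. The sketch below is a one-dimensional Crank-Nicolson BPM step under the paraxial approximation; it is an illustrative assumption about the general technique, not the thesis model, and omits the second transverse dimension and the photorefractive index response:

import numpy as np

def fd_bpm_1d(field, delta_n, wavelength, n0, dx, dz, steps):
    """Propagate a complex transverse field through an index perturbation delta_n(x)
    using the paraxial Crank-Nicolson finite-difference scheme (zero-field boundaries)."""
    n = field.size
    k0 = 2 * np.pi / wavelength
    k = k0 * n0
    d2 = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / dx**2    # second-derivative operator
    m = 1j / (2 * k) * d2 + 1j * k0 * np.diag(delta_n)  # paraxial propagation operator
    a = np.eye(n) - 0.5 * dz * m
    b = np.eye(n) + 0.5 * dz * m
    for _ in range(steps):
        field = np.linalg.solve(a, b @ field)           # one Crank-Nicolson z-step
    return field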