176 results for exponentially weighted moving average
Abstract:
As a relatively new signalling approach, the moving-block scheme significantly increases line capacity, especially on congested railways. This paper describes a simulation system for multi-train operation under a moving-block signalling scheme. The simulator can be used to calculate minimum headways and safety characteristics under pre-set timetables or headways and different geographic and traction conditions. Advanced software techniques are adopted to support flexibility within the simulator, so that it is a general-purpose computer-aided design tool to evaluate the performance of moving-block signalling.
Abstract:
Advances in digital technology have caused a radical shift in moving image culture. This has occurred in both modes of production and sites of exhibition, resulting in a blurring of boundaries that previously defined a range of creative disciplines. Re-Imagining Animation: The Changing Face of the Moving Image, by Paul Wells and Johnny Hardstaff, argues that as a result of these blurred disciplinary boundaries, the term “animation” has become a “catch all” for describing any form of manipulated moving image practice. Understanding animation predicates the need to (re)define the medium within contemporary moving image culture. Via a series of case studies, the book engages with a range of moving image works, interrogating “how the many and varied approaches to making film, graphics, visual artefacts, multimedia and other intimations of motion pictures can now be delineated and understood” (p. 7). The structure and clarity of content make this book ideally suited to any serious study of contemporary animation which accepts animation as a truly interdisciplinary medium.
Abstract:
This short paper suggests that the categories of ‘transformational’ and ‘transactional’ leadership styles ( see Burns 1972) may provide analytical purchase on the question of whether current women leaders have radically different styles and approaches to the earlier second wave feminist generation. The two cases chosen for this paper are the senior women in the Labor and Liberal Parties – Julia Gillard and Julie Bishop. The evidence – explored below – indicates there are strong transactional qualities to both women leaders.
Abstract:
Purpose: The purpose of this review was to present an in-depth analysis of literature identifying the extent of dropout from Internet-based treatment programmes for psychological disorders, and literature exploring the variables associated with dropout from such programmes.

Methods: A comprehensive literature search was conducted on PsycINFO and PubMed with the keywords: dropouts, drop out, dropout, dropping out, attrition, premature termination, termination, non-compliance, treatment, intervention, and program, each in combination with the keywords Internet and web. A total of 19 studies published between 1990 and April 2009 and focusing on dropout from Internet-based treatment programmes involving minimal therapist contact were identified and included in the review.

Results: Dropout ranged from 2% to 83%, and a weighted average of 31% of the participants dropped out of treatment. A range of variables has been examined for their association with dropout from Internet-based treatment programmes for psychological disorders. Despite the numerous variables explored, evidence on any specific variables that may make an individual more likely to drop out of Internet-based treatment is currently limited.

Conclusions: This review highlights the need for more rigorous and theoretically guided research exploring the variables associated with dropping out of Internet-based treatment for psychological disorders.
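A participant-weighted average dropout of the kind reported above can be sketched as follows; the study sizes and dropout rates below are hypothetical, not the figures from the 19 reviewed studies.

```python
# Hypothetical (participants, dropout_fraction) pairs, one per study.
studies = [(120, 0.02), (45, 0.83), (200, 0.35), (80, 0.28)]

def weighted_dropout(studies):
    """Participant-weighted average dropout rate across studies."""
    total = sum(n for n, _ in studies)
    return sum(n * rate for n, rate in studies) / total

print(f"{weighted_dropout(studies) * 100:.1f}% weighted average dropout")
```

Weighting by study size prevents a tiny study with an extreme rate from dominating the pooled estimate, which is why the pooled 31% can sit far from the midpoint of the 2–83% range.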
Abstract:
This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of the cross-correlation and zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The scores output by the two systems are then merged using weighted-sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. AZR was compared with state-of-the-art and standardised VAD methods and outperformed the best-performing system with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
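A minimal sketch of the MaxPeak idea (the maximum non-zero-lag peak of the normalised autocorrelation) and weighted-sum fusion follows; the paper's frame sizes, CrossCorr feature, and actual fusion weights are not given in the abstract, so the details below are illustrative.

```python
import numpy as np

def max_autocorr_peak(frame):
    """MaxPeak-style feature: largest non-zero-lag peak of the
    normalised time-domain autocorrelation of one signal frame."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:          # silent frame: no energy, no peak
        return 0.0
    ac = ac / ac[0]          # normalise so the zero-lag value is 1
    return float(np.max(ac[1:]))

def fuse(scores, weights):
    """Weighted-sum fusion of per-system scores."""
    return float(np.dot(scores, weights))

# A periodic (voiced-like) frame scores far higher than white noise.
t = np.arange(400)
voiced = np.sin(2 * np.pi * t / 50)
noise = np.random.default_rng(0).standard_normal(400)
```

Periodic speech produces a strong secondary autocorrelation peak near the pitch period, while broadband noise does not, which is what makes the feature a plausible voicing cue.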
Abstract:
Social tags are an important information source in Web 2.0. They can be used to describe users’ topic preferences as well as the content of items to make personalized recommendations. However, since tags are arbitrary words given by users, they contain a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. To eliminate the noise of tags, in this paper we propose to use the multiple relationships among users, items and tags to find the semantic meaning of each tag for each user individually. With the proposed approach, the relevant tags of each item and the tag preferences of each user are determined. In addition, user- and item-based collaborative filtering combined with a content filtering approach is explored. The effectiveness of the proposed approaches is demonstrated in experiments conducted on real-world datasets collected from Amazon.com and the citeULike website.
Abstract:
Social tags in Web 2.0 are becoming another important information source for describing the content of items as well as for profiling users’ topic preferences. However, as arbitrary words given by users, tags contain a lot of noise, such as tag synonyms, semantic ambiguity, and a large number of personal tags used by only one user, which makes it challenging to use tags effectively for item recommendations. To solve these problems, this paper proposes to use a set of related tags, along with their weights, to represent the semantic meaning of each tag for each user individually. Hybrid recommendation generation approaches based on the weighted tags are proposed. We have conducted experiments using a real-world dataset obtained from Amazon.com. The experimental results show that the proposed approaches outperform other state-of-the-art approaches.
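The general idea of a weighted tag representation can be sketched as follows; this illustrates tag weighting and user–item matching only, not the paper's specific algorithm, and the tags used are invented.

```python
from collections import Counter

def tag_profile(tag_assignments):
    """Weighted tag vector: tag -> normalised frequency weight."""
    counts = Counter(tag_assignments)
    total = sum(counts.values())
    return {tag: c / total for tag, c in counts.items()}

def match_score(user_profile, item_profile):
    """Weighted overlap between a user's tag preferences and an
    item's tag vector; higher means a better topical match."""
    return sum(w * item_profile.get(tag, 0.0)
               for tag, w in user_profile.items())

# Hypothetical tag data: a user who mostly tags machine-learning items.
user = tag_profile(["ml", "ml", "python"])
camera = tag_profile(["photo", "camera"])
textbook = tag_profile(["ml", "textbook"])
```

Here the textbook outscores the camera for this user because its tag vector overlaps the user's dominant tag; noise reduction in the paper goes further by expanding each tag into a weighted set of related tags.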
Abstract:
The availability of new information and communication technologies creates opportunities for new, mobile tele-health services. While many promising tele-health projects deliver working R&D prototypes, they often do not result in actual deployment. We aim to identify critical issues that can increase our understanding and enhance the viability of mobile tele-health services beyond the R&D phase by developing a business model. The present study describes the systematic development and evaluation of a service-oriented business model for tele-monitoring and -treatment of chronic lower back pain patients, based on a mobile technology prototype. We address challenges of multi-sector collaboration and disruptive innovation.
Abstract:
This paper considers the contentious space between self-affirmation and self-preoccupation in Elizabeth Gilbert’s popular travel memoir, Eat, Pray, Love. Following the surveillance of the female confessant, the female traveller has recently come under close scrutiny and public suspicion. She is accused of walking a fine line between critical self-insight and obsessive self-importance, and her travel narratives are branded as accounts of navel gazing that are less concerned with what is seen than with who is doing the seeing. In reading these themes against the backdrop of women’s travel, the possibility arises that the culture of narcissism is increasingly read as a female discursive practice, concerned with authorship, privacy and the subjectivity of truth. The memoir, which has been praised by some as ‘the ultimate guide to balanced living’ and dismissed by others as ‘self-serving junk’, poses questions about the requisites in Western culture for being a female traveller and for telling a story that focuses primarily on the self.
Abstract:
An existing model for solvent penetration and drug release from a spherically-shaped polymeric drug delivery device is revisited. The model has two moving boundaries, one that describes the interface between the glassy and rubbery states of the polymer, and another that defines the interface between the polymer ball and the pool of solvent. The model is extended so that the nonlinear diffusion coefficient of drug explicitly depends on the concentration of solvent, and the resulting equations are solved numerically using a front-fixing transformation together with a finite difference spatial discretisation and the method of lines. We present evidence that our scheme is much more accurate than a previous scheme. Asymptotic results in the small-time limit are presented, which show how the use of a kinetic law as a boundary condition on the innermost moving boundary dictates qualitative behaviour, the scalings being very different to those of the similar moving boundary problem that arises from modelling the melting of an ice ball. The implication is that the model considered here exhibits what is referred to as “non-Fickian” or Case II diffusion which, together with the initially constant rate of drug release, has certain appeal from a pharmaceutical perspective.
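In general form, the front-fixing transformation mentioned above maps the moving domain onto a fixed one; the sketch below is the standard (Landau) version for a single moving boundary $S(t)$, not the paper's full two-boundary model with its specific concentrations and boundary conditions.

```latex
% Landau (front-fixing) transformation: map 0 <= r <= S(t) to 0 <= xi <= 1
\xi = \frac{r}{S(t)}, \qquad c(r,t) = u(\xi,t),
% so that, by the chain rule,
\frac{\partial c}{\partial t}
  = \frac{\partial u}{\partial t}
  - \frac{\xi\,\dot{S}(t)}{S(t)}\,\frac{\partial u}{\partial \xi},
\qquad
\frac{\partial c}{\partial r}
  = \frac{1}{S(t)}\,\frac{\partial u}{\partial \xi}.
```

With the domain fixed at $0 \le \xi \le 1$, a standard finite difference grid and the method of lines can be applied without remeshing as the boundary moves; the price is the extra advection-like term proportional to $\dot{S}/S$.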
Abstract:
There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
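Plain LWR (without the receding-horizon extension) can be sketched as a kernel-weighted least-squares fit around each query point; the Gaussian kernel and bandwidth below are common choices, not necessarily those used in the paper.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=1.0):
    """Locally Weighted Regression: fit a linear model with Gaussian
    weights centred on x_query, then evaluate it at x_query."""
    X = np.asarray(X, dtype=float)          # shape (n, d)
    y = np.asarray(y, dtype=float)          # shape (n,)
    Xb = np.column_stack([np.ones(len(X)), X])       # add bias column
    qb = np.concatenate([[1.0], np.atleast_1d(x_query)])
    d2 = np.sum((X - x_query) ** 2, axis=1)          # squared distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))           # Gaussian kernel
    W = np.diag(w)
    # Weighted least squares: beta = (X'WX)^-1 X'Wy
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return float(qb @ beta)
```

Because each prediction is fit locally, LWR can track nonlinear input–output maps from logged data, but, as the abstract notes, it needs a well-constrained relationship between inputs and outputs, which is what the receding horizon of future output states supplies.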
Abstract:
While recent research has provided valuable information as to the composition of laser printer particles and their formation mechanisms, and explained why some printers are high emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remained unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day? (ii) what is the airborne particle exposure to office workers in the vicinity of laser printers? (iii) what influence does the office ventilation have upon the transport and concentration of particles? (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment? (v) what instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on printer temporal and spatial PNC during the operation of 107 laser printers within open-plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (< 100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer-derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that such exposure be controlled along with exposure to printer-derived particles.
We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10⁴ particles cm⁻³ has been proposed.
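An eight-hour time-weighted average of the kind reported above can be sketched as follows; the sampling durations and particle number concentrations used here are hypothetical.

```python
def eight_hour_twa(samples):
    """Eight-hour time-weighted average from (duration_h, concentration)
    pairs; time in the 8 h shift not covered by a sample counts as
    zero exposure."""
    return sum(hours * conc for hours, conc in samples) / 8.0

# e.g. 4 h near background, 4 h near an active printer (values invented,
# in particles per cubic centimetre)
exposure = eight_hour_twa([(4.0, 1.0e4), (4.0, 3.0e4)])
```

The TWA deliberately averages away short peaks, which is why the abstract argues for a separate peak reference value: a brief printer burst two orders of magnitude above background barely moves the eight-hour average.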
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that, using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly.
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
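The base nonlinear lateral guidance law of Park, Deyst, and How that the ENG algorithm builds on can be sketched as follows; the wind enhancements and turning-direction logic described above are not reproduced here, and the flight parameters used are invented.

```python
import math

def lateral_accel_cmd(v, l1, eta):
    """Park-style nonlinear guidance: commanded lateral acceleration
    toward a reference point a distance l1 ahead on the desired path.
    v is ground speed and eta is the angle from the velocity vector
    to the line of sight to the reference point."""
    return 2.0 * v ** 2 / l1 * math.sin(eta)

# e.g. 20 m/s ground speed, 50 m lookahead, 30 degrees off the path
a_cmd = lateral_accel_cmd(20.0, 50.0, math.radians(30.0))
```

The command vanishes when the velocity vector points at the reference point and grows with the angular error, steering the aircraft back onto the path; for small errors on a straight path it behaves like a well-damped PD controller on cross-track error.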
Abstract:
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
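The optimistic criterion can be illustrated by how interval value iteration selects a transition model within the interval bounds (in the style of Givan, Leach and Dean's interval value iteration, on which the discounted analysis builds); the state values and probability bounds below are invented.

```python
def best_distribution(lo, hi, values, optimistic=True):
    """Choose transition probabilities p_i within [lo_i, hi_i] that
    maximise (optimistic) or minimise (pessimistic) the expected
    successor value: start every p_i at its lower bound, then greedily
    spend the remaining probability mass on the best (or worst)
    successor states first."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=optimistic)
    p = list(lo)
    slack = 1.0 - sum(lo)
    for i in order:
        give = min(hi[i] - lo[i], slack)
        p[i] += give
        slack -= give
    return p

# Two successor states with values 1.0 and 0.0 and interval bounds:
p_opt = best_distribution([0.1, 0.1], [0.8, 0.9], [1.0, 0.0])
```

Running value iteration with this inner choice at every state-action pair yields the optimistic (or pessimistic) value function; the greedy mass assignment is valid because the expected value is linear in the probabilities, so the extreme is attained at a vertex of the interval polytope.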