819 results for "Minimal entropy martingale measure"
Abstract:
Transportation disadvantage has been recognised as a key source of social exclusion, so an appropriate process is required to investigate and resolve the problem. Currently, transportation disadvantage is typically determined on the basis of income, poverty and mobility level. It may be better regarded from an accessibility perspective, since accessibility reflects an individual's inability to reach desired activities. This paper proposes a process for determining transportation disadvantage that takes accessibility and social transportation conflict as the essence of a framework. The framework embeds space-time organisation within the dimensions of accessibility to arrive at a rigorous definition of transportation disadvantage. In developing the framework, the definition, dimensions, components and measures of accessibility were scrutinised. The findings suggest that definition and dimension are significant research approaches for evaluating the travel experience of the disadvantaged. Concurrently, location accessibility measures are incorporated to strengthen the determination of accessibility level. A literature review of social exclusion and mobility-related exclusion identified the dimensions and sources of transportation disadvantage, and revealed that the appropriate way to identify the transportation disadvantaged is to incorporate space-time organisation within the studied components. The suggested framework is an inter-related process consisting of the components of accessibility: the individual, the network (transport system) and activities (destinations). The integration of, and correlation among, these components determines the level of transportation disadvantage. The findings are then used to map the spatial distribution of the transportation disadvantaged, and appropriate policies are developed to resolve the problems.
Abstract:
In the field of the semantic grid, QoS-based Web service composition is an important problem. In a semantic- and service-rich environment like the semantic grid, context constraints on Web services are common, so composition must consider not only the QoS properties of Web services but also the inter-service dependencies and conflicts that these context constraints create. In this paper, we present a repair genetic algorithm, the minimal-conflict hill-climbing repair genetic algorithm, to address the Web service composition optimization problem in the presence of domain constraints and inter-service dependencies and conflicts. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
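To make the repair idea concrete, here is a minimal Python sketch of a genetic algorithm whose offspring are repaired by minimal-conflict hill climbing before entering the next generation. The chromosome encoding, the QoS utilities and the conflict pairs are toy assumptions for illustration; the paper's actual QoS aggregation and constraint model are not given in the abstract.

```python
import random

N_TASKS = 5   # abstract tasks in the composition
CANDS = 4     # candidate services per task
# qos[t][s]: utility of choosing service s for task t (higher is better)
qos = [[random.random() for _ in range(CANDS)] for _ in range(N_TASKS)]
# Conflict pairs: two (task, service) assignments that may not co-occur.
conflicts = {((0, 1), (2, 3)), ((1, 0), (3, 2))}

def violations(chrom):
    """Return the conflict pairs violated by a chromosome."""
    return {c for c in conflicts
            if chrom[c[0][0]] == c[0][1] and chrom[c[1][0]] == c[1][1]}

def repair(chrom):
    """Minimal-conflict hill climbing: while constraints are violated,
    reassign a conflicting task to the service with fewest violations."""
    while True:
        viol = violations(chrom)
        if not viol:
            return chrom
        task = next(iter(viol))[0][0]   # a task involved in a violation
        counts = {s: len(violations(chrom[:task] + [s] + chrom[task + 1:]))
                  for s in range(CANDS)}
        best = min(counts, key=counts.get)
        if counts[best] >= len(viol):
            return chrom                # local minimum: accept as-is
        chrom = chrom[:task] + [best] + chrom[task + 1:]

def fitness(chrom):
    return sum(qos[t][s] for t, s in enumerate(chrom))

# Plain GA loop: tournament selection, one-point crossover, mutation, repair.
pop = [[random.randrange(CANDS) for _ in range(N_TASKS)] for _ in range(20)]
for _ in range(50):
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(len(pop))]
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, N_TASKS)
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            if random.random() < 0.1:
                child[random.randrange(N_TASKS)] = random.randrange(CANDS)
            children.append(repair(child))
    pop = children

best = max(pop, key=fitness)
print(best, round(fitness(best), 3), len(violations(best)))
```

The point of the repair step is that the GA can recombine freely while the population stays feasible with respect to the dependency and conflict constraints.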
Abstract:
To investigate whether venous occlusion plethysmography (VOP) may be used to measure the high rates of arterial inflow associated with exercise, venous occlusions were performed at rest and following dynamic handgrip exercise at 15, 30, 45, and 60% of maximum voluntary contraction (MVC) in seven healthy males. The effect of including more than one cardiac cycle in the calculation of blood flow was assessed by comparing the cumulative blood flow over one, two, three, or four cardiac cycles. The inclusion of more than one cardiac cycle at 30 and 60% MVC, and more than two cardiac cycles at 15 and 45% MVC, resulted in a lower blood flow compared with using only the first cardiac cycle (P < 0.05). Despite the small time interval over which arterial inflow was measured (~1 second), the reproducibility of the technique was not affected. Reproducibility (coefficient of variation for arterial inflow over three trials) tended to be poorer at the higher workloads, although this was not significant (12.7 ± 6.6%, 16.2 ± 7.3%, and 22.9 ± 9.9% for the 15, 30, and 45% MVC workloads; P = 0.102). There was also a tendency for greater reproducibility with the inclusion of more cardiac cycles at the highest workload, but this did not reach significance (P = 0.070). In conclusion, when calculated over the first cardiac cycle only during venous occlusion, high rates of forearm blood flow (FBF) can be measured using VOP without a significant decrease in the reproducibility of the measurement.
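As a rough illustration of the calculation being compared above, the sketch below estimates arterial inflow as the slope of the limb-volume trace over the first n cardiac cycles of an occlusion. The sampling rate, beat times and synthetic volume trace are invented placeholders, not study data.

```python
import numpy as np

fs = 100.0                           # sampling rate (Hz), assumed
t = np.arange(0.0, 4.0, 1.0 / fs)    # 4 s of venous occlusion
# Toy plethysmogram: linear volume rise plus a pulsatile component (%).
volume = 0.8 * t + 0.02 * np.sin(2 * np.pi * 1.2 * t)
beat_times = np.arange(0.0, 4.0, 1.0 / 1.2)   # heartbeats at 72 bpm

def inflow(volume, t, beat_times, n_cycles=1):
    """Arterial inflow (% volume/min) as the slope of the volume trace
    over the first n_cycles cardiac cycles after cuff inflation."""
    start, stop = beat_times[0], beat_times[n_cycles]
    m = (t >= start) & (t <= stop)
    slope = np.polyfit(t[m], volume[m], 1)[0]   # % volume per second
    return slope * 60.0

for n in (1, 2, 3, 4):   # cumulative flow over one to four cycles
    print(n, "cycle(s):", round(inflow(volume, t, beat_times, n), 2))
```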
Abstract:
Purpose: The cornea is known to be susceptible to forces exerted by the eyelids. There have been previous attempts to quantify eyelid pressure, but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor was investigated, along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed with a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so that the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications, including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
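The calibration step described in the Results might look something like the sketch below, which fits a line to known applied pressures so that raw digital counts can be converted to pressure units. The readings, units and the assumption of a linear sensor response are illustrative only.

```python
import numpy as np

# Known pressures applied evenly to the sensor-on-lens combination (kPa),
# and the corresponding raw digital counts (both invented for illustration).
applied_kpa = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
raw_counts = np.array([3.0, 41.0, 80.0, 118.0, 157.0])

# Fit a first-order calibration curve (assumed linear response).
to_pressure = np.poly1d(np.polyfit(raw_counts, applied_kpa, 1))

print(round(to_pressure(100.0), 2))   # raw count of 100 -> pressure in kPa
```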
Abstract:
Purpose: To evaluate the psychometric properties of a Chinese version of the Diabetes Coping Measure (DCM-C) scale. Methods: A self-administered questionnaire was completed by 205 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Confirmatory factor analysis, criterion validity, and internal consistency reliability were used to evaluate the psychometric properties of the DCM-C. Findings: Confirmatory factor analysis confirmed a four-factor structure (χ²/df = 1.351, GFI = .904, CFI = .902, RMSEA = .041). The DCM-C was significantly associated with HbA1c and diabetes self-care behaviors. Internal consistency reliability of the total DCM-C scale was .74. Cronbach's alpha coefficients for each subscale of the DCM-C ranged from .37 (tackling spirit) to .66 (diabetes integration). Conclusions: The DCM-C demonstrated satisfactory reliability and validity for determining the use of diabetes coping strategies. The tackling spirit dimension needs further refinement when applying this scale to Chinese populations with diabetes. Clinical Relevance: Healthcare providers who care for Chinese people with diabetes can use the DCM-C for early determination of diabetes coping strategies.
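For reference, the internal-consistency statistic reported above can be computed directly from an item-score matrix. A minimal sketch with random placeholder data (not DCM-C responses, so the resulting alpha will be near zero):

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 5, size=(205, 8)).astype(float)  # 205 respondents, 8 items

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(items), 3))
```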
Abstract:
Delirium is a disorder of acute onset with fluctuating symptoms and is characterized by inattention, disorganized thinking, and altered levels of consciousness. The risk for delirium is greatest in individuals with dementia, and the incidence of both is increasing worldwide because of the aging of our population. Although several clinical trials have tested interventions for delirium prevention in individuals without dementia, little is known about the mechanisms for the prevention of delirium in early-stage Alzheimer’s disease (AD). The purpose of this article is to explore ways of preventing delirium and slowing the rate of cognitive decline in early-stage AD by enhancing cognitive reserve. An agenda for future research on interventions to prevent delirium in individuals with early-stage AD is also presented.
Abstract:
BACKGROUND: Previous epidemiological investigations of associations between dietary glycemic intake and insulin resistance have used average daily measures of glycemic index (GI) and glycemic load (GL). We explored multiple and novel measures of dietary glycemic intake to determine which was most predictive of an association with insulin resistance. METHODS: Usual dietary intakes were assessed by diet history interview in women aged 42-81 years participating in the Longitudinal Assessment of Ageing in Women. Daily measures of dietary glycemic intake (n = 329) were carbohydrate, GI, GL, and GL per megacalorie (GL/Mcal), while meal-based measures (n = 200) were breakfast, lunch and dinner GL, and a new measure, GL peak score, to represent meal peaks. Insulin-resistant status was defined as a homeostasis model assessment (HOMA) value of >3.99; HOMA as a continuous variable was also investigated. RESULTS: GL, GL/Mcal, carbohydrate (all P < 0.01), GL peak score (P = 0.04) and lunch GL (P = 0.04) were positively and independently associated with insulin-resistant status. Daily measures were more predictive than meal-based measures, with minimal difference between GL/Mcal, GL and carbohydrate. No significant associations were observed with HOMA as a continuous variable. CONCLUSION: A dietary pattern with high peaks of GL above the individual's average intake was a significant independent predictor of insulin resistance in this population; however, the contribution was smaller than that of the daily GL and carbohydrate variables. Accounting for energy intake slightly increased the predictive ability of GL, which is potentially important when examining disease risk in more diverse populations with wider variations in energy requirements.
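The dietary and insulin-resistance measures named above follow standard published formulas; a minimal sketch with illustrative inputs (not study data):

```python
def glycemic_load(gi, carb_g):
    """GL = GI x available carbohydrate (g) / 100."""
    return gi * carb_g / 100.0

def gl_per_mcal(gl, energy_kcal):
    """GL normalised for energy intake (per megacalorie)."""
    return gl / (energy_kcal / 1000.0)

def homa(glucose_mmol_l, insulin_uu_ml):
    """HOMA = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

daily_gl = glycemic_load(gi=55, carb_g=220)              # 121.0
print(daily_gl, gl_per_mcal(daily_gl, energy_kcal=2000))
print(homa(5.5, 18.0) > 3.99)   # insulin-resistant status cut-off used above
```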
Abstract:
Frontline employee behaviours are recognised as vital for achieving a competitive advantage in service organisations. The services marketing literature has comprehensively examined ways to improve frontline employee behaviours in service delivery and recovery. However, limited attention has been paid to frontline employee behaviours that favour customers in ways that go against organisational norms or rules. This study examines these behaviours by introducing the behavioural concept of Customer-Oriented Deviance (COD). COD is defined as "frontline employees exhibiting extra-role behaviours that they perceive to defy existing expectations or prescribed rules of higher authority through service adaptation, communication and use of resources to benefit customers during interpersonal service encounters." This thesis develops a COD measure and examines the key determinants of these behaviours from a frontline employee perspective. Existing research on similar behaviours, which originated in the positive deviance and pro-social behaviour domains, has limitations and is considered inadequate for examining COD in the services context. The absence of a well-developed body of knowledge on non-conforming service behaviours has implications for both theory and practice. The provision of 'special favours' increases customer satisfaction, but the over-servicing of customers is also counterproductive for service delivery and costly for the organisation. Despite these implications of non-conforming service behaviours, there is little understanding of the nature of these behaviours and their key drivers. This research addresses inadequacies in prior research on positive deviance and pro-social and pro-customer behaviour to develop the theoretical foundation of COD. The concept of positive deviance, which has predominantly been used to study organisational behaviours, is applied within a services marketing setting. Further, this research addresses previous limitations of the pro-social and pro-customer behavioural literature, which has examined only limited forms of behaviour without a clear understanding of their nature. Building upon these literature streams, this research adopts a holistic approach to the conceptualisation of COD, addressing previous shortcomings in the literature by providing a well-bounded definition, a psychometrically sound measure, and a conceptually well-founded model of COD. The concept of COD was examined across three separate studies, grounded in the theoretical foundations of role theory and social identity theory. Study 1 was exploratory and based on in-depth interviews using the Critical Incident Technique (CIT). The aim of Study 1 was to understand the nature of COD and qualitatively identify its key drivers. Thematic analysis revealed two potential dimensions of COD: Deviant Service Adaptation (DSA) and Deviant Service Communication (DSC). In addition, themes representing the potential influences on COD were broadly classified as individual, situational, and organisational factors. Study 2 was a scale development procedure involving the generation and purification of items for the measure, based on two student samples working in customer service roles (pilot sample, N=278; initial validation sample, N=231). The reliability results and Exploratory Factor Analyses (EFA) on the pilot sample suggested the scale had poor psychometric properties.
As a result, major revisions were made to item wordings, and new items were developed from the literature to reflect a new dimension, Deviant Use of Resources (DUR). The revised items were tested on the initial validation sample, with the EFA suggesting a four-factor structure of COD. The aim of Study 3 was to further purify the COD measure and test for nomological validity based on its theoretical relationships with key antecedents and similar constructs (key correlates). The theoretical model of COD, consisting of nine hypotheses, was tested on retail and hospitality samples of frontline employees (retail N=311; hospitality N=305) drawn from a market research panel using an online survey. The data were analysed using Structural Equation Modelling (SEM). The results supported a re-specified second-order, three-factor model of COD consisting of 11 items. Overall, the COD measure was found to be reliable and valid, demonstrating convergent validity, discriminant validity and marginal partial invariance for the factor loadings. The results supported nomological validity, although the antecedents had differing impacts on COD across samples. Specifically, empathy and perspective-taking, role conflict, and job autonomy significantly influenced COD in the retail sample, whereas empathy and perspective-taking, risk-taking propensity and role conflict were significant predictors in the hospitality sample. In addition, customer orientation-selling orientation, the altruistic dimension of organisational citizenship behaviours, workplace deviance, and socially desirable responding were found to correlate with COD. This research makes several contributions to theory. First, the findings extend the literature on positive deviance and pro-social and pro-customer behaviours. Second, the research provides an empirically tested model describing the antecedents of COD. Third, it contributes a reliable and valid measure of COD. Finally, it investigates the differential effects of the key antecedents of COD across service sectors. The findings also contribute to services marketing practice: practitioners can better understand the phenomenon of COD and use the measurement tool to calibrate COD levels within their organisations, and knowledge of the key determinants of COD will help improve recruitment and training programs and drive internal initiatives within the firm.
Abstract:
In the wake of recent corporate collapses, 'corporate governance' has received unprecedented levels of attention. It can be narrowly defined as how a company is directed and steered. The responsibility of steering a company is entrusted to the board of directors, who become the focus of governance mechanisms. Yet this is not as straightforward as it appears: Australia has experienced massive shifts in business regulations over the past two decades. One innovation in Australian business regulation is 'enforced self-regulation', which combines the benefits of voluntary self-regulation with the coercive power of the State, implemented via a compliance program. A possible hazard of a compliance system is that management might treat this responsibility as a 'box-ticking' exercise. Effective governance and compliance therefore entail more than setting up internal and regulatory mechanisms; the willingness of various stakeholders to collaborate is crucial. This suggests that managing relationships between stakeholders of an organization is the key to averting corporate collapses.
Abstract:
Economics education research studies conducted in the UK, USA and Australia to investigate the effects of learning inputs on academic performance have been dominated by the input-output model (Shanahan and Meyer, 2001). In the Student Experience of Learning framework, however, the link between learning inputs and outputs is mediated by students' learning approaches, which in turn are influenced by their perceptions of the learning contexts (Evans, Kirby, & Fabrigar, 2003). Many learning inventories, such as Biggs' Study Process Questionnaire and Entwistle and Ramsden's Approaches to Studying Inventory, have been designed to measure approaches to academic learning. However, generalised learning inventories have a limitation: they tend to aggregate the different learning approaches utilised in different assessments. As a result, important relationships between learning approaches and learning outcomes that exist in specific assessment context(s) will be missed (Lizzio, Wilson, & Simons, 2002). This paper documents the construction of an assessment-specific instrument to measure learning approaches in economics. The post-dictive validity of the instrument was evaluated by examining the association of learning approaches with students' perceived assessment demands in different assessment contexts.
Abstract:
Hot and cold temperatures significantly increase mortality rates around the world, but which measure of temperature is the best predictor of mortality is not known. We used mortality data from 107 US cities for the years 1987–2000 and examined the association between temperature and mortality using Poisson regression, modelling a non-linear temperature effect and a non-linear lag structure. We examined mean, minimum and maximum temperature with and without humidity, as well as apparent temperature and the Humidex. The best measure was defined as that with the minimum cross-validated residual. We found large differences in the best temperature measure between age groups, seasons and cities, and no one temperature measure was superior to the others. The strong correlation between different measures of temperature means that, on average, they have the same predictive ability. The best temperature measure for new studies can therefore be chosen on practical grounds, such as choosing the measure with the least missing data.
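A minimal sketch of this modelling approach, fitting a Poisson regression with a spline temperature term and a crude moving-average lag on simulated data; the study's actual models and its cross-validated comparison of temperature measures were considerably richer.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
# Simulated daily temperatures with a seasonal cycle plus noise.
temp = 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 3, n)
data = pd.DataFrame({"temp": temp})
data["temp_lag"] = data["temp"].rolling(7, min_periods=1).mean()  # crude lag
risk = 0.002 * (data["temp"] - 17) ** 2        # U-shaped "true" effect
data["deaths"] = rng.poisson(np.exp(3.0 + risk))

# Non-linear temperature effect via B-spline bases in the formula.
model = smf.glm("deaths ~ bs(temp, df=4) + bs(temp_lag, df=4)",
                data=data, family=sm.families.Poisson()).fit()
print(model.aic)   # candidate temperature measures would be compared on
                   # held-out (cross-validated) residuals, as in the study
```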
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: Participants were 6,594 patients (4,854 men, 1,740 women) who had been referred to a hospital alcohol and drug service for alcohol-use disorders and who voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
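The comparison between the existing weighted scoring and the proposed simple scoring can be illustrated with a small sketch. The item weights and responses below are hypothetical placeholders; the actual bMAST weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
items = rng.integers(0, 2, size=(500, 9)).astype(float)       # 9 yes/no items
weights = np.array([2, 2, 5, 1, 2, 2, 2, 5, 5], dtype=float)  # hypothetical

weighted_total = items @ weights
simple_total = items.sum(axis=1)
# A high correlation would indicate the two scoring systems rank
# respondents near-identically, consistent with what the study reports.
print(round(np.corrcoef(weighted_total, simple_total)[0, 1], 3))
```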
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage, followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit their temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a minimax optimisation problem based on a joint RER cost criterion.
We prove that this joint RER cost criterion bounds the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple-HMM filtering approach and a novel RER-based multiple-filter design process. The utility of our multiple-HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
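A heavily simplified sketch of the two-stage pipeline described above: a close-minus-open (CMO) morphological filter to enhance dim point targets, followed by a per-pixel two-state Bayesian forward filter standing in for the thesis's HMM-based track-before-detect stage. The kernel size, transition probabilities, likelihood model and test frames are all assumptions; the actual MHMM filter and its RER-based design are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def cmo(frame, size=5):
    """Close-minus-open: grey-scale closing keeps small bright features,
    opening removes them, so the difference highlights point targets."""
    closed = ndimage.grey_closing(frame, size=(size, size))
    opened = ndimage.grey_opening(frame, size=(size, size))
    return closed - opened

def forward_filter(scores, p_stay=0.95, p_birth=0.01, thresh=2.5):
    """Per-pixel two-state (background/target) forward filter; returns
    the final P(target) map after processing all frames."""
    p = np.full(scores[0].shape, p_birth)
    for s in scores:
        pred = p * p_stay + (1 - p) * p_birth          # state prediction
        lik = np.exp(np.clip(s - thresh, 0.0, 10.0))   # target likelihood ratio
        p = pred * lik / (pred * lik + (1 - pred))     # Bayes update
    return p

# Toy sequence: a dim target drifting slowly across noisy 64x64 frames.
rng = np.random.default_rng(3)
frames = []
for k in range(10):
    f = rng.normal(0.0, 0.5, (64, 64))
    f[32, 20 + k // 2] += 4.0                          # dim moving target
    frames.append(f)

posterior = forward_filter([cmo(f) for f in frames])
print(np.unravel_index(posterior.argmax(), posterior.shape))  # on the target track
```

The forward filter accumulates evidence across frames, which is what lets a target too dim to detect reliably in any single frame emerge over time.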
Abstract:
Product placement is a fast-growing, multi-billion-dollar industry, yet measures of its effectiveness, which influence the critical area of pricing, have been problematic. Past attempts to measure the effect of a placement, and therefore to provide a basis for pricing placements, have been confounded by the effect on consumers of multiple prior exposures to a brand name across all marketing communications. Virtual product placement offers several advantages: it can serve as a tool to measure the effectiveness of product placements, it helps address the lack of audience selectivity in traditional product placement, it allows different audiences to be tested for brands, and it addresses a gap in the existing academic literature by focusing on the impact of product placement on recall and recognition of new brands.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorised treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; ΔGPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ±0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; ΔGPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to determine physiological thresholds (VO2max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
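The Δ-position/time speed estimate evaluated in the first study can be sketched as follows; Doppler-derived speed is reported directly by the receiver, so only the positional method needs computing. The 1 Hz fixes below are invented coordinates for a runner moving at roughly 3 m/s.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented 1 Hz fixes (lat, lon) for a runner at roughly 3 m/s.
fixes = [(-27.47700, 153.02800), (-27.47700, 153.02803),
         (-27.47701, 153.02806), (-27.47701, 153.02809)]
dt = 1.0
speeds = [haversine_m(*a, *b) / dt for a, b in zip(fixes, fixes[1:])]
print([round(s, 2) for s in speeds])   # m/s between successive fixes
```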
Group-level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor that accounted for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that, for some runners, the strategy of varying speed systematically to account for gradients and transitions may benefit race performance on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was best predicted using a weighted factor that accounted for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals adopted distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
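A speculative sketch of the kind of weighted gradient predictor the second and third studies describe, in which each section's gradient is blended with a decaying memory of prior gradients and regressed against speed. The blend weights, decay constant and data are illustrative assumptions, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(4)
gradient = rng.uniform(-8, 8, 300)                 # per-section gradients (%)

def weighted_gradient(grads, decay=0.7):
    """Blend each section's gradient with a decaying memory of priors."""
    memory, out = 0.0, []
    for g in grads:
        memory = decay * memory + (1 - decay) * g  # prior-gradient carry-over
        out.append(0.6 * g + 0.4 * memory)         # current + carried effect
    return np.array(out)

wg = weighted_gradient(gradient)
speed = 4.2 - 0.08 * wg + rng.normal(0, 0.1, 300)  # toy speeds (m/s)

# Ordinary least squares of speed on the weighted gradient factor.
X = np.column_stack([np.ones_like(wg), wg])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
pred = X @ beta
r2 = 1 - ((speed - pred) ** 2).sum() / ((speed - speed.mean()) ** 2).sum()
print(round(r2, 2))
```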