346 results for Shortest path problem
Abstract:
This paper presents an extension to the Rapidly-exploring Random Tree (RRT) algorithm applied to autonomous, drifting underwater vehicles. The proposed algorithm is able to plan paths that guarantee convergence in the presence of time-varying ocean dynamics. The method uses 4-dimensional ocean model prediction data as an evolving basis for expanding the tree from the start location to the goal. The performance of the proposed method is validated through Monte Carlo simulations. Results illustrate the importance of temporal variance in path execution and demonstrate the convergence guarantee of the proposed methods.
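The abstract gives no implementation details; purely as an illustration of the general idea, the Python sketch below shows how a single tree-expansion step might propagate a node through a time-varying current field. All names, the 4-D lookup current_at, and the numbers are hypothetical and not the authors' algorithm.

import math, random

def current_at(x, y, z, t):
    # Hypothetical 4-D ocean-model lookup: returns (vx, vy, vz) at a position and time.
    return (0.1 * math.sin(t), 0.05 * math.cos(t), 0.0)

def steer(node, target, speed=1.0, dt=10.0):
    # Propagate the vehicle from 'node' toward 'target' while it drifts with the current.
    x, y, z, t = node
    dx, dy, dz = target[0] - x, target[1] - y, target[2] - z
    d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    vx, vy, vz = current_at(x, y, z, t)
    return (x + (speed * dx / d + vx) * dt,
            y + (speed * dy / d + vy) * dt,
            z + (speed * dz / d + vz) * dt,
            t + dt)

def rrt_step(tree, bounds):
    # One RRT expansion: sample a point, find the spatially nearest node, steer toward it.
    sample = tuple(random.uniform(lo, hi) for lo, hi in bounds)
    nearest = min(tree, key=lambda n: sum((a - b) ** 2 for a, b in zip(n[:3], sample)))
    tree.append(steer(nearest, sample))
    return tree

tree = [(0.0, 0.0, -10.0, 0.0)]  # (x, y, z, t) start state
for _ in range(200):
    rrt_step(tree, [(-100, 100), (-100, 100), (-50, 0)])

Because each node carries its own time stamp, the current field seen during expansion evolves with the tree, which is the gist of planning against time-varying ocean dynamics.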
Abstract:
There is a need for systems that can autonomously perform coverage tasks over large outdoor areas. Unfortunately, the state of the art is to use GPS-based localization, which is not suitable for precise operations near trees and other obstructions. In this paper we present a robotic platform for autonomous coverage tasks. The system architecture integrates laser-based localization and mapping using the Atlas Framework with Rapidly-exploring Random Tree path planning and Virtual Force Field obstacle avoidance. We demonstrate the performance of the system in simulation as well as in real-world experiments.
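For context only, the following minimal Python sketch illustrates the attraction/repulsion idea behind Virtual Force Field style obstacle avoidance; the gains, influence radius and obstacle positions are made up and this is not the authors' implementation.

import math

def vff_force(robot, goal, obstacles, k_att=1.0, k_rep=50.0, influence=2.0):
    # Attractive force toward the goal plus repulsive forces from nearby obstacles.
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            # Repulsion grows as the obstacle gets closer.
            fx += k_rep * (1.0 / d - 1.0 / influence) * dx / d ** 2
            fy += k_rep * (1.0 / d - 1.0 / influence) * dy / d ** 2
    return fx, fy

# Example: steer toward (10, 0) while being pushed away from an obstacle at (1, 0.5).
print(vff_force((0.0, 0.0), (10.0, 0.0), [(1.0, 0.5)]))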
Abstract:
This article discusses the design of interactive online activities that introduce problem-solving skills to first year law students. They are structured around the narrative framework of ‘Ruby’s Music Festival’, in which a young business entrepreneur encounters various issues when organising a music festival, and students use a generic problem-solving method to provide legal solutions. These online activities offer students the opportunity to obtain early formative feedback on their legal problem-solving abilities prior to undertaking a later summative assessment task. The design of the activities around the Ruby narrative framework and the benefits of providing students with early formative feedback will be discussed.
Abstract:
An increasing range of technology services are now offered on a self-service basis. However, problems with self-service technologies (SSTs) sometimes occur due to technical errors, staff errors, or consumers’ own mistakes. Considering the role of consumers as co-producers in the SST context, we aim to study consumers’ behaviours, strategies, and decision making in solving their problems with SSTs and to identify the factors contributing to their persistence in problem solving. This study contributes to information systems research as the first study that aims to identify such a process and the factors affecting consumers’ persistence in solving their problems with SSTs. A focus group with user support staff has been conducted, yielding initial results that informed the next phases of the study. Next, using the Critical Incident Technique, data will be gathered through focus groups with users, a diary study, and think-aloud sessions.
Abstract:
Objective: This study highlights the serious consequences of ignoring reverse causality bias in studies of compensation-related factors and health outcomes, and demonstrates a technique for addressing this problem in observational data. Study Design and Setting: Data from an English longitudinal study of factors, including claims for compensation, associated with recovery from neck pain (whiplash) after rear-end collisions are used to demonstrate the potential for reverse causality bias. Although it is commonly believed that claiming compensation leads to worse recovery, it is also possible that poor recovery leads to compensation claims, a point that is seldom considered and never addressed empirically. This pedagogical study compares the association between compensation claiming and recovery when reverse causality bias is ignored and when it is addressed, controlling for the same observable factors. Results: When reverse causality is ignored, claimants appear to have worse recovery than nonclaimants; however, when reverse causality bias is addressed, claiming compensation appears to have a beneficial effect on recovery, ceteris paribus. Conclusion: To avert biased policy and judicial decisions that might inadvertently disadvantage people with compensable injuries, there is an urgent need for researchers to address reverse causality bias in studies of compensation-related factors and health.
Abstract:
Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (as in GGH [9] or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and there has thus been noticeable interest in developing countermeasures to the attacks, but with little success [6]. In [8], Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai’s nearest plane algorithm such that the distribution of a vector reduced modulo a secret parallelepiped depends only on the size of the basis used. Using this algorithm and generating large, close-to-uniform public keys, they obtained provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to NTRUSign [26] (from a theoretical point of view). In this paper we present an alternative approach to sealing the leak of NTRUSign. Instead of modifying the lattices and algorithms used, we compute a classic leaky NTRUSign signature and hide it with Gaussian noise using techniques present in Lyubashevsky’s signatures. Our main contributions are thus a set of strong NTRUSign parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky NTRU signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the Õ(N^1.5)-Shortest Independent Vector Problem over NTRU lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
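As a rough, toy illustration of the noise-flooding idea (Lyubashevsky-style rejection sampling applied to a plain real vector, with made-up parameters; not the scheme, lattices or parameters proposed in the paper):

import numpy as np

def hide_with_gaussian(leaky_sig, sigma, M=3.0, rng=np.random.default_rng()):
    # Add Gaussian noise to the leaky signature and use rejection sampling so that the
    # published vector's distribution no longer depends on the secret-dependent part.
    v = np.asarray(leaky_sig, dtype=float)
    while True:
        y = rng.normal(0.0, sigma, size=v.shape)   # masking noise
        z = y + v                                  # candidate output
        # Accept with probability D_sigma(z) / (M * D_{v,sigma}(z)).
        accept_prob = np.exp((-2.0 * z.dot(v) + v.dot(v)) / (2.0 * sigma ** 2)) / M
        if rng.random() < min(1.0, accept_prob):
            return z

# Toy usage: the accepted output is statistically independent of 'leaky_sig'.
print(hide_with_gaussian(np.array([3.0, -2.0, 5.0]), sigma=100.0))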
Abstract:
One of the main challenges facing online and offline path planners is the uncertainty in the magnitude and direction of the environmental energy, because it is dynamic, time-varying, and hard to forecast. This thesis develops an artificial-intelligence approach that enables a mobile robot to learn from historical or forecasted data on the environmental energy available in the area of interest, which supports persistent monitoring under uncertainty using the developed algorithm.
Abstract:
We show the first deterministic construction of an unconditionally secure multiparty computation (MPC) protocol in the passive adversarial model over black-box non-Abelian groups which is both optimal (secure against an adversary who possesses any t
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and there is a gap between the mean losses of the best and second-best experts. In the full-information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit-information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
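For reference, a minimal Python sketch of the exponential-weights (Hedge) update over a fixed set of experts, the kind of worst-case-safe baseline the abstract refers to; the learning rate and toy losses are illustrative only.

import numpy as np

def hedge(loss_matrix, eta=0.5):
    # loss_matrix[t, i] is the loss of expert i at trial t, assumed to lie in [0, 1].
    n_trials, n_experts = loss_matrix.shape
    weights = np.ones(n_experts)
    total_loss = 0.0
    for t in range(n_trials):
        probs = weights / weights.sum()           # distribution over experts
        total_loss += probs @ loss_matrix[t]      # expected loss of the mixture
        weights *= np.exp(-eta * loss_matrix[t])  # exponential-weights update
    return total_loss

# Toy run: two experts, one clearly better, with iid-style losses.
rng = np.random.default_rng(0)
losses = np.column_stack([rng.random(1000) * 0.2, rng.random(1000) * 0.8])
print(hedge(losses), losses.sum(axis=0).min())

Shifting-expert algorithms modify the weight update (for example by mixing a little weight back to all experts each trial) so the mixture can track a comparator that changes between segments.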
Abstract:
Staffing rural and remote schools is an important policy issue for the public good. This paper examines the private issues it also poses for teachers with families working in these communities, as they seek to reconcile careers with educational choices for children. The paper first considers historical responses to staffing rural and remote schools in Australia, and the emergence of neoliberal policy encouraging marketisation of the education sector. We report on interviews with 11 teachers across regional, rural and remote communities in Queensland about the considerations motivating household mobility. Like other middle-class parents, these teachers prioritised their children’s educational opportunities over career opportunities. The analysis demonstrates how teachers in rural and remote communities constitute a special group of educational consumers with insider knowledge and unique dilemmas around school choice. Their heightened anxieties around school choice under neoliberal policy are shown to contribute to the public issue of staffing rural and remote schools.
Abstract:
This paper addresses less recognised factors that influence the diffusion of a particular technology. While an innovation’s attributes and performance are paramount, many innovations fail because of external factors that favour an alternative. This paper, with theoretical input from diffusion, lock-in and path-dependency, presents a qualitative study of the external factors that influenced the evolution of transportation in the USA. This historical account reveals how one technology and its emergent systems became dominant while other choices were overridden by socio-political, economic and technological interests, which include not just the manufacturing and service industries associated with the automobile but also government and market stakeholders. Termed here a large socio-economic regime (LSER), its power in ensuring lock-in and continued path-dependency is shown to pass through three stages, weakening eventually as awareness improves. The study extends to transport trends in China, Korea, Indonesia and Malaysia, all of which show the dominant role of an LSER. As transportation policy is increasingly required to address both demand and environmental concerns and innovators search for solutions, this paper presents important knowledge for innovators, marketers and policy makers, for both commercial and societal reasons, especially when negative externalities associated with an incumbent transportation technology may lead to market failure.
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
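The abstract does not describe the constructive algorithm itself; purely to illustrate what a constructive placement heuristic can look like, here is a generic greedy sketch in Python with hypothetical machine types, costs and job demands (not the authors' method).

def place_jobs(jobs, machine_types):
    # jobs: list of (cpu, mem) demands; machine_types: list of (cpu_cap, mem_cap, hourly_cost).
    # Greedy constructive heuristic: put each job on an already open machine with enough
    # spare capacity, otherwise open the cheapest machine type that can host it.
    machines = []  # each entry: [cpu_free, mem_free, hourly_cost]
    for cpu, mem in sorted(jobs, reverse=True):
        fit = next((m for m in machines if m[0] >= cpu and m[1] >= mem), None)
        if fit is None:
            cap = min((t for t in machine_types if t[0] >= cpu and t[1] >= mem),
                      key=lambda t: t[2])
            fit = [cap[0], cap[1], cap[2]]
            machines.append(fit)
        fit[0] -= cpu
        fit[1] -= mem
    return sum(m[2] for m in machines)  # total hourly cost of the placement

# Toy usage with made-up machine types (cpu, mem, $/hour) and job demands.
print(place_jobs([(2, 4), (1, 2), (4, 8)], [(4, 8, 0.20), (8, 16, 0.35)]))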
Abstract:
Trade union membership, both in aggregate numbers and in density, has declined in the majority of advanced economies globally over recent decades (Blanchflower, 2007). In Australia, the decline in the 1990s was somewhat more precipitate than in most countries (Peetz, 1998). As discussed in Chapter 1, reasons for the decline are multifactorial, including a more hostile environment to unionism created by employers and the state, difficulties with workplace union organisation, and structural change in the economy (Bryson and Gomez, 2005; Bryson et al., 2011; Ebbinghaus et al., 2011; Payne, 1989; Waddington and Kerr, 2002; Waddington and Whitson, 1997). Our purpose in this chapter is to look beyond aggregate Australian union density data, to examine how age relates to membership decline, and how different age groups, particularly younger workers, are located in the story of union decline. The practical implications of this research are that understanding how unions relate to workers of different age groups, and to workers of different genders amongst those age groups, may lead to improved recruitment and better union organisation.
Abstract:
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviors of birds flocking or fish schooling. Although PSO has been applied to many well-known numerical test problems, it suffers from premature convergence. A number of basic variations have been developed to solve the premature convergence problem and to improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors include a classification of the approaches and identify the main features of each proposal. In the last part of the study, some topics within this field that are considered promising areas of future research are listed.
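As a pointer to how the basic global-best PSO iteration works, here is a minimal Python sketch on a toy objective; the inertia and acceleration coefficients are typical textbook values and are not tied to any particular surveyed variant.

import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize the sphere function.
print(pso(lambda p: float(np.sum(p ** 2))))

Premature convergence shows up when the swarm collapses onto the current global best too early; many of the surveyed variants modify the inertia weight, topology, or restart rules to counteract this.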
Abstract:
Brain connectivity analyses are increasingly popular for investigating brain organization. Many connectivity measures rely on path length, generally defined as the number of nodes traversed to connect one node in a graph to another. Despite its name, this path length is purely topological and does not take into account the physical length of the connections. The distance of the trajectory may also be highly relevant, but is typically overlooked in connectivity analyses. Here we combined genotyping, anatomical MRI and HARDI to understand how our genes influence cortical connections, using whole-brain tractography. We defined a new measure, based on Dijkstra's algorithm, to compute path lengths for tracts connecting pairs of cortical regions. We compiled these measures into matrices whose elements represent the physical distance traveled along tracts. We then analyzed a large cohort of healthy twins and show that our path length measure is reliable, heritable, and influenced even in young adults by the Alzheimer's risk gene, CLU.
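For illustration, a minimal Python sketch of computing a distance-weighted shortest path between two regions with Dijkstra's algorithm, in the spirit of the physical path length measure described; the toy graph, region names and tract lengths are made up and this is not the authors' pipeline.

import heapq

def physical_path_length(adj, src, dst):
    # adj[node] = list of (neighbour, tract_length_mm); returns the shortest total
    # physical distance from src to dst, or None if the regions are not connected.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, length in adj.get(u, []):
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None

# Toy cortical graph: edge weights are physical tract lengths in millimetres.
adj = {"precuneus": [("cuneus", 22.0), ("superior_parietal", 35.0)],
       "cuneus": [("lingual", 18.0)],
       "superior_parietal": [("lingual", 40.0)]}
print(physical_path_length(adj, "precuneus", "lingual"))  # 40.0, via the cuneus

Collecting this value for every pair of regions yields a matrix whose entries are physical distances along tracts rather than topological hop counts.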