29 results for probabilistic ranking
Abstract:
This paper aims to provide a Bayesian parametric framework to tackle the accessibility problem across space in urban theory. Adopting continuous variables in a probabilistic setting, we are able to associate the density distribution with the Kendall's tau index and to replicate the general issues related to the role of proximity in a more general context. In addition, by referring to the Beta and Gamma distributions, we are able to introduce a differentiation feature in each spatial unit without resorting to any a priori definition of territorial units. We also provide an empirical application of our theoretical setting to study the density distribution of the population across Massachusetts.
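As a concrete illustration of the Kendall's tau index mentioned above, the following minimal Python sketch computes the tau statistic between two hypothetical accessibility scores for a handful of spatial units; the data and variable names are invented for illustration and are not taken from the paper:

```python
# Illustrative only: Kendall's tau between two hypothetical rankings of spatial units.
from scipy.stats import kendalltau

# Hypothetical accessibility scores for five spatial units under two specifications.
accessibility_a = [0.82, 0.40, 0.65, 0.91, 0.12]
accessibility_b = [0.78, 0.35, 0.70, 0.88, 0.20]

tau, p_value = kendalltau(accessibility_a, accessibility_b)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```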
Abstract:
This paper proposes MSISpIC, a probabilistic sonar scan matching algorithm for the localization of an autonomous underwater vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) together with the robot displacement estimated through dead-reckoning using a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method is an extension of the pIC algorithm. An extended Kalman filter (EKF) is used to estimate the robot path during the scan in order to reference all the range and bearing measurements, as well as their uncertainty, to a scan-fixed frame before registering. The major contribution consists of experimentally proving that probabilistic sonar scan matching techniques have the potential to improve DVL-based navigation. The algorithm has been tested on an AUV guided along a 600 m path within an abandoned marina underwater environment with satisfactory results.
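A minimal sketch of the kind of EKF prediction step used to dead-reckon a planar robot pose from DVL body-frame velocities and an MRU yaw rate; the state layout, motion model and noise values are assumptions for illustration, not the paper's exact filter:

```python
# Sketch of an EKF-style dead-reckoning prediction from DVL body-frame velocity
# and MRU yaw rate; state, model and noise values are illustrative only.
import numpy as np

def predict(x, P, v_body, r, dt, Q):
    """Propagate planar pose x = [x, y, psi] using DVL velocity v_body = [u, v]
    (surge, sway) and MRU yaw rate r."""
    psi = x[2]
    c, s = np.cos(psi), np.sin(psi)
    # Motion model: rotate body-frame velocity into the world frame and integrate.
    x_pred = np.array([x[0] + dt * (c * v_body[0] - s * v_body[1]),
                       x[1] + dt * (s * v_body[0] + c * v_body[1]),
                       psi + dt * r])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, dt * (-s * v_body[0] - c * v_body[1])],
                  [0, 1, dt * ( c * v_body[0] - s * v_body[1])],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x, P = np.zeros(3), np.eye(3) * 1e-3
Q = np.diag([1e-3, 1e-3, 1e-4])   # assumed process noise
x, P = predict(x, P, v_body=[0.5, 0.0], r=0.01, dt=0.1, Q=Q)
```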
Abstract:
The paper discusses the maintenance challenges of organisations with a huge number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes that the instruments are connected and report the features relevant for monitoring. It also requires enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model the cause-effect relationships that trigger predictive maintenance services based on the influence between observed features and previously documented diagnostics.
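For the Markov availability part, the steady-state availability of a two-state (working/failed) Markov model with failure rate lambda and repair rate mu is mu / (lambda + mu); the snippet below applies this standard formula to a hypothetical instrument (the rates are invented):

```python
# Steady-state availability of a two-state Markov model (working <-> failed).
# lam: failure rate (failures per hour), mu: repair rate (repairs per hour).
def availability(lam: float, mu: float) -> float:
    return mu / (lam + mu)

# Hypothetical instrument: one failure every 1000 h, repaired in 8 h on average.
print(availability(lam=1 / 1000, mu=1 / 8))   # ~0.992
```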
Abstract:
When dealing with the design of service networks, such as health and EMS services, banking or distributed ticket selling services, the location of service centers has a strong influence on the congestion at each of them and, consequently, on the quality of service. In this paper, several models are presented to consider service congestion. The first model addresses the location of the least number of single-server centers such that all the population is served within a standard distance, and nobody stands in line for a time longer than a given time limit or with more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and tested on a 30-node network.
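The two congestion constraints described above (a cap on waiting time and a cap on the number of clients already in line) can be illustrated with standard M/M/1 queueing formulas; this generic sketch is not the paper's formulation, and the arrival and service rates are hypothetical:

```python
# Illustrative M/M/1 check of the two congestion constraints mentioned above:
# expected waiting time in queue and probability of finding more than b clients.
def mm1_wait_in_queue(lam: float, mu: float) -> float:
    rho = lam / mu
    assert rho < 1, "server would be overloaded"
    return rho / (mu - lam)          # Wq = rho / (mu - lam)

def mm1_prob_more_than(lam: float, mu: float, b: int) -> float:
    rho = lam / mu
    return rho ** (b + 1)            # P(N > b) = rho^(b+1)

lam, mu = 8.0, 10.0                  # hypothetical arrival / service rates per hour
print(mm1_wait_in_queue(lam, mu))    # 0.4 h
print(mm1_prob_more_than(lam, mu, b=5))
```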
Abstract:
The aim of this paper is twofold: firstly, to carry out a theoretical review of the most recent stated preference techniques used for eliciting consumers' preferences and, secondly, to compare the empirical results of two different stated preference discrete choice approaches. They differ in the measurement scale for the dependent variable and, therefore, in the estimation method, despite both using a multinomial logit. One of the approaches uses a complete ranking of full profiles (contingent ranking), that is, individuals must rank a set of alternatives from the most to the least preferred, and the other uses a first-choice rule in which individuals must select the most preferred option from a choice set (choice experiment). From the results we realize how important the measurement scale for the dependent variable becomes and to what extent procedure invariance is satisfied.
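Both approaches rest on the multinomial logit, whose choice probabilities are a softmax of the systematic utilities; a minimal sketch with invented utilities:

```python
# Multinomial logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j).
import numpy as np

def mnl_probabilities(V):
    """V: systematic utilities of the alternatives in a choice set."""
    expV = np.exp(V - np.max(V))     # subtract the max for numerical stability
    return expV / expV.sum()

# Hypothetical utilities of three alternatives.
print(mnl_probabilities(np.array([1.2, 0.4, -0.3])))
```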
Abstract:
This paper compares two well-known scan matching algorithms: the MbICP and the pIC. As a result of the study, MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contribution consists of: 1) using an EKF to estimate the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group all the data grabbed along the path described by the robot into a single scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
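Grouping the data grabbed along the robot path into a single scan requires compounding relative poses and propagating their uncertainty; the sketch below shows the standard first-order 2-D pose compounding, used here only as a generic illustration of such an uncertainty model, not the paper's exact one:

```python
# Standard 2-D pose compounding (Smith-Self-Cheeseman) with first-order
# covariance propagation; a generic sketch, not the paper's exact model.
import numpy as np

def compound(x1, P1, x2, P2):
    """Compose relative pose x2 onto absolute pose x1; poses are [x, y, theta]."""
    c, s = np.cos(x1[2]), np.sin(x1[2])
    x3 = np.array([x1[0] + c * x2[0] - s * x2[1],
                   x1[1] + s * x2[0] + c * x2[1],
                   x1[2] + x2[2]])
    # Jacobians of the composition with respect to each pose.
    J1 = np.array([[1, 0, -s * x2[0] - c * x2[1]],
                   [0, 1,  c * x2[0] - s * x2[1]],
                   [0, 0, 1]])
    J2 = np.array([[c, -s, 0],
                   [s,  c, 0],
                   [0,  0, 1]])
    P3 = J1 @ P1 @ J1.T + J2 @ P2 @ J2.T
    return x3, P3
```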
Abstract:
A new model for dealing with decision making under risk, considering subjective and objective information in the same formulation, is presented. The uncertain probabilistic weighted average (UPWA) is also presented. Its main advantage is that it unifies the probability and the weighted average in the same formulation while considering the degree of importance that each case has in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied, and it is seen to be very broad because all the previous studies that use the probability or the weighted average can be revised with this new approach. Focus is placed on a multi-person decision making problem regarding the selection of strategies by using the theory of expertons.
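One common way to unify a probability vector and a weighting vector in a single aggregation is a convex combination of the two weight sets applied to interval-valued arguments; the sketch below uses that form as an assumption for illustration and is not necessarily the paper's exact UPWA definition:

```python
# Illustrative sketch of unifying a probability vector and a weighting vector in
# one aggregation over interval arguments; the convex-combination form below is
# an assumption for illustration, not necessarily the paper's exact UPWA.
def upwa(intervals, p, w, beta):
    """intervals: list of (lo, hi); p: probabilities; w: weights; beta in [0, 1]."""
    v = [beta * pi + (1 - beta) * wi for pi, wi in zip(p, w)]   # unified weights
    lo = sum(vi * a[0] for vi, a in zip(v, intervals))
    hi = sum(vi * a[1] for vi, a in zip(v, intervals))
    return (lo, hi)

# Hypothetical payoffs of a strategy under three scenarios, given as intervals.
print(upwa([(40, 60), (10, 30), (70, 90)],
           p=[0.3, 0.5, 0.2], w=[0.4, 0.3, 0.3], beta=0.6))
```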
Abstract:
We describe the version of the GPT planner to be used in the planning competition. This version, called mGPT, solves MDPs specified in the PPDDL language by extracting and using different classes of lower bounds, along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations of the MDP in which alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms, on the other hand, use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state with the greedy policy.
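The determinization idea can be illustrated on a toy MDP: every probabilistic outcome of an action becomes its own deterministic edge, and shortest-path costs in the relaxed graph are admissible lower bounds on the optimal expected costs, usable to seed heuristic search. The problem, names and code below are illustrative, not mGPT's implementation:

```python
# Sketch of an "all-outcomes" determinization for lower bounds: each probabilistic
# outcome becomes its own deterministic edge; shortest-path costs in the relaxation
# underestimate the MDP's optimal expected costs.
import heapq

# Toy MDP: state -> action -> (cost, [(probability, successor), ...])
mdp = {
    "s0": {"a": (1.0, [(0.8, "s1"), (0.2, "s0")])},
    "s1": {"b": (1.0, [(0.5, "goal"), (0.5, "s0")])},
    "goal": {},
}

def determinize(mdp):
    """Map every probabilistic outcome to a separate deterministic edge."""
    return {s: [(cost, s2) for (cost, outcomes) in actions.values()
                for (_, s2) in outcomes]
            for s, actions in mdp.items()}

def lower_bound(edges, goal):
    """Dijkstra over the determinized graph, run backwards from the goal."""
    rev = {s: [] for s in edges}
    for s, outs in edges.items():
        for cost, s2 in outs:
            rev[s2].append((cost, s))
    dist = {s: float("inf") for s in edges}
    dist[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        d, s = heapq.heappop(pq)
        if d > dist[s]:
            continue
        for cost, s_prev in rev[s]:
            if d + cost < dist[s_prev]:
                dist[s_prev] = d + cost
                heapq.heappush(pq, (dist[s_prev], s_prev))
    return dist

print(lower_bound(determinize(mdp), "goal"))   # admissible heuristic values
```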
Abstract:
In this paper, two probabilistic adaptive algorithms for jointly detecting active users in a DS-CDMA system are reported. The first one, which is based on the theory of hidden Markov models (HMMs) and the Baum-Welch (BW) algorithm, is proposed within the CDMA scenario and compared with the second one, which is a previously developed Viterbi-based algorithm. Both techniques are completely blind in the sense that no knowledge of the signatures, channel state information, or training sequences is required for any user. Once convergence has been achieved, an estimate of the signature of each user convolved with its physical channel response (CR) and estimated data sequences are provided. This CR estimate can be used to switch to any decision-directed (DD) adaptation scheme. Performance of the algorithms is verified via simulations as well as on experimental data obtained in an underwater acoustics (UWA) environment. In both cases, performance is found to be highly satisfactory, showing the near-far resistance of the analyzed algorithms.
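The Baum-Welch re-estimation used by the first algorithm is built on the forward (and backward) recursions of an HMM; the following minimal forward pass for a discrete HMM with invented toy parameters illustrates that machinery, not the DS-CDMA detector itself:

```python
# Minimal forward recursion for a discrete HMM (the likelihood computation that
# Baum-Welch re-estimation builds on); the toy parameters are illustrative only.
import numpy as np

def forward(pi, A, B, obs):
    """pi: initial state probs, A: transition matrix, B: emission matrix,
    obs: sequence of observation indices. Returns the sequence likelihood."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward(pi, A, B, obs=[0, 1, 1, 0]))
```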
Abstract:
Diagnosis of community-acquired legionella pneumonia (CALP) is currently performed by means of laboratory techniques, which may delay diagnosis by several hours. To determine whether an ANN can discriminate between CALP and non-legionella community-acquired pneumonia (NLCAP) and serve as a standard tool for clinicians, we prospectively studied 203 patients with community-acquired pneumonia (CAP) diagnosed by laboratory tests. Twenty-one clinical and analytical variables were recorded to train a neural net with two classes (CALP or NLCAP). In this paper we deal with the problem of diagnosis, feature selection and ranking of the features as a function of their classification importance, and the design of a classifier under the criterion of maximizing the ROC (receiver operating characteristic) area, which gives a good trade-off between true positives and false negatives. In order to guarantee the validity of the statistics, the train-validation-test databases were rotated using the jackknife technique, and a multistart procedure was applied to make the system insensitive to local maxima.
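The ROC-area criterion can be computed directly from classifier scores via the rank-based (Mann-Whitney) estimate, i.e. the fraction of positive/negative pairs ranked correctly; the sketch below illustrates that criterion with invented scores and is not the paper's training procedure:

```python
# ROC area from classifier scores via the rank-based (Mann-Whitney) estimate;
# a generic sketch of the criterion, not the paper's training procedure.
def roc_auc(scores_pos, scores_neg):
    """Fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

print(roc_auc(scores_pos=[0.9, 0.8, 0.55], scores_neg=[0.6, 0.3, 0.2, 0.1]))  # ~0.917
```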
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method utilizes two extended Kalman filters (EKF). The first estimates the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and keeps the poses of the registered scans. The raw data from the sensors are processed and fused in line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
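A generic sketch of the state-augmentation step of an augmented-state EKF: when a scan is completed, a copy of the current robot pose is appended to the state vector and the corresponding covariance blocks are copied. The state layout and block sizes are assumptions for illustration, not the paper's exact filter:

```python
# Generic state-augmentation step of an augmented-state EKF: a clone of the
# current robot pose (first 3 state entries here) is appended to the state;
# the layout and block sizes are illustrative.
import numpy as np

def augment(x, P, pose_dim=3):
    """Append a clone of the current robot pose to the state and covariance."""
    x_aug = np.concatenate([x, x[:pose_dim]])
    n = x.shape[0]
    P_aug = np.zeros((n + pose_dim, n + pose_dim))
    P_aug[:n, :n] = P
    P_aug[n:, :n] = P[:pose_dim, :]          # cross-covariance with existing state
    P_aug[:n, n:] = P[:, :pose_dim]
    P_aug[n:, n:] = P[:pose_dim, :pose_dim]  # covariance of the cloned pose
    return x_aug, P_aug
```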
Abstract:
We identify in this paper two conditions that characterize the domain of single-peaked preferences on the line in the following sense: a preference profile satisfies these two properties if and only if there exists a linear order $L$ over the set of alternatives such that these preferences are single-peaked with respect to $L$. The first property states that, for any subset of alternatives, the set of alternatives considered as the worst by all agents cannot contain more than two elements. The second property states that two agents cannot disagree on the relative ranking of two alternatives with respect to a third alternative while agreeing on the (relative) ranking of a fourth one. Classification-JEL: D71, C78
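For reference, single-peakedness with respect to a given linear order $L$ can be checked directly from its definition (preferences must decline monotonically when moving away from each agent's peak along $L$); the sketch below implements that direct check, not the paper's two characterizing conditions:

```python
# Direct check of single-peakedness with respect to a given linear order L;
# an illustration of the definition, not the paper's characterization.
def is_single_peaked(profile, L):
    """profile: list of rankings (best to worst); L: list giving the linear order."""
    pos = {a: i for i, a in enumerate(L)}
    for ranking in profile:
        rank = {a: i for i, a in enumerate(ranking)}   # lower index = more preferred
        peak = pos[ranking[0]]
        # Ranks must increase (preference must decline) moving away from the peak.
        left  = [rank[L[i]] for i in range(peak, -1, -1)]
        right = [rank[L[i]] for i in range(peak, len(L))]
        if sorted(left) != left or sorted(right) != right:
            return False
    return True

# Three agents over alternatives ordered a < b < c < d on the line.
profile = [["b", "c", "a", "d"], ["a", "b", "c", "d"], ["d", "c", "b", "a"]]
print(is_single_peaked(profile, ["a", "b", "c", "d"]))   # True
```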
Abstract:
We analyze the classical Bertrand model when consumers exhibit some strategic behavior in deciding from which seller they will buy. We use two related but different tools. Both consider a probabilistic learning (or evolutionary) mechanism, and in both of them consumers' behavior influences the competition between the sellers. The results obtained show that, in general, developing some sort of loyalty is a good strategy for the buyers, as it works in their best interest. First, we consider a learning procedure described by a deterministic dynamic system and, using strong simplifying assumptions, we can produce a description of the process behavior. Second, we use finite automata to represent the strategies played by the agents and an adaptive process based on genetic algorithms to simulate the stochastic process of learning. By doing so we can relax some of the strong assumptions used in the first approach and still obtain the same basic results. It is suggested that the limitations of the first approach (analytical) provide a good motivation for the second approach (Agent-Based). Indeed, although both approaches address the same problem, the use of Agent-Based computational techniques allows us to relax hypotheses and overcome the limitations of the analytical approach.
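A deliberately simplified sketch of a probabilistic loyalty rule, in which a buyer's probability of returning to a seller is reinforced whenever that seller quoted the (weakly) lower price; this toy is only meant to illustrate the kind of learning mechanism discussed, not the paper's dynamic system or its genetic-algorithm simulation:

```python
# Toy illustration of probabilistic loyalty learning by a single buyer facing
# two sellers with random prices; not the paper's model.
import random

def simulate(rounds=1000, step=0.05, seed=0):
    rng = random.Random(seed)
    p_seller_0 = 0.5                      # buyer's probability of visiting seller 0
    for _ in range(rounds):
        prices = [rng.uniform(1.0, 2.0), rng.uniform(1.0, 2.0)]
        chosen = 0 if rng.random() < p_seller_0 else 1
        # Reinforce loyalty if the chosen seller was (weakly) the cheaper one.
        satisfied = prices[chosen] <= prices[1 - chosen]
        if chosen == 0:
            p_seller_0 += step * (1 - p_seller_0) if satisfied else -step * p_seller_0
        else:
            p_seller_0 -= step * p_seller_0 if satisfied else -step * (1 - p_seller_0)
    return p_seller_0

print(simulate())
```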