111 results for Probabilistic robotics


Relevance:

10.00%

Publisher:

Abstract:

Background: The aim of this report is to describe the main characteristics of the design, including response rates, of the Cornella Health Interview Survey Follow-up Study. Methods: The original cohort consisted of 2,500 subjects (1,263 women and 1,237 men) interviewed as part of the 1994 Cornella Health Interview Study. A record linkage to update the address and vital status of the cohort members was carried out using first a deterministic method and then a probabilistic one based on each subject's first name and surnames. Subsequently, we attempted to locate the cohort members to conduct the phone follow-up interviews. A pilot study was carried out to test the overall feasibility and to modify some procedures before the field work began. Results: After record linkage, 2,468 (98.7%) subjects were successfully traced. Of these, 91 (3.6%) were deceased, 259 (10.3%) had moved to other towns, and 50 (2.0%) had neither renewed their last municipal census documents nor declared having moved. After using different strategies to track and retain cohort members, we traced 92% of the CHIS participants. Of these, 1,605 subjects answered the follow-up questionnaire. Conclusion: The computerized record linkage maximized the success of the follow-up, which was carried out 7 years after the baseline interview. The pilot study was useful for increasing the efficiency of tracing and interviewing the respondents.
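The two-stage linkage described above can be sketched in miniature. This is an illustrative sketch only: the field names (`first_name`, `surname1`, `surname2`) are hypothetical, and a generic string-similarity score stands in for whatever matching software the study actually used.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two normalized names (0..1)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def probabilistic_match(record, candidate, threshold=0.85):
    """Score a candidate against a cohort record on first name and surnames.

    A deterministic pass would require exact equality on all fields; this
    probabilistic pass accepts near matches (typos, accent loss) whose
    average similarity clears a threshold.
    """
    fields = ("first_name", "surname1", "surname2")
    score = sum(name_similarity(record[f], candidate[f]) for f in fields) / len(fields)
    return score, score >= threshold

# a record that a deterministic pass would miss because of a typo
rec = {"first_name": "Montserrat", "surname1": "Garcia", "surname2": "Pujol"}
cand = {"first_name": "Montserat", "surname1": "Garcia", "surname2": "Pujol"}
score, matched = probabilistic_match(rec, cand)
```

The deterministic pass is cheap and unambiguous, so running it first and reserving the probabilistic pass for the residual records mirrors the order described in the abstract.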

Relevance:

10.00%

Publisher:

Abstract:

Recently, there has been increased interest in the neural mechanisms underlying perceptual decision making. However, the effect of neuronal adaptation in this context has not yet been studied. We begin our study by investigating how adaptation can bias perceptual decisions. We considered behavioral data from an experiment on high-level adaptation-related aftereffects in a perceptual decision task with ambiguous stimuli in humans. To understand the driving force behind the perceptual decision process, a biologically inspired cortical network model was used. Two theoretical scenarios arose for explaining the perceptual switch from the category of the adaptor stimulus to the opposite, non-adapted one. One is noise-driven transition, due to the probabilistic spike times of neurons; the other is adaptation-driven transition, due to afterhyperpolarization currents. With increasing levels of neural adaptation, the system shifts from a noise-driven to an adaptation-driven mode. The behavioral results show that the underlying model is not just a bistable model, as is usual in the decision-making modeling literature, but that neuronal adaptation is high and therefore the working point of the model is in the oscillatory regime. Using the same model parameters, we studied the effect of neural adaptation in a perceptual decision-making task where the same ambiguous stimulus was presented with and without a preceding adaptor stimulus. We find that for different levels of sensory evidence favoring one of the two interpretations of the ambiguous stimulus, higher levels of neural adaptation lead to quicker decisions, contributing to a speed-accuracy trade-off.

Relevance:

10.00%

Publisher:

Abstract:

Wireless “MIMO” systems, employing multiple transmit and receive antennas, promise a significant increase of channel capacity, while orthogonal frequency-division multiplexing (OFDM) is attracting a good deal of attention due to its robustness to multipath fading. Thus, the combination of both techniques is an attractive proposition for radio transmission. The goal of this paper is the description and analysis of a novel pilot-aided estimator of multipath block-fading channels. Typical models leading to estimation algorithms assume the number of multipath components and delays to be constant (and often known), while their amplitudes are allowed to vary with time. Our estimator is focused instead on the more realistic assumption that the number of channel taps is also unknown and varies with time following a known probabilistic model. The estimation problem arising from these assumptions is solved using Random-Set Theory (RST), whereby one regards the multipath-channel response as a single set-valued random entity. Within this framework, Bayesian recursive equations determine the evolution with time of the channel estimator. Due to the lack of a closed form for the solution of the Bayesian equations, a (Rao-Blackwellized) particle filter (RBPF) implementation of the channel estimator is advocated. Since the resulting estimator exhibits a complexity which grows exponentially with the number of multipath components, a simplified version is also introduced. Simulation results describing the performance of our channel estimator demonstrate its effectiveness.
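The particle-filter component can be illustrated in miniature. The sketch below is a plain bootstrap (SIR) particle filter tracking a single, slowly drifting tap amplitude; the paper's Random-Set machinery, Rao-Blackwellization, and time-varying tap count are all omitted, and every parameter is illustrative.

```python
import random, math

def bootstrap_particle_filter(observations, n_particles=500, q=0.1, r=0.2, seed=1):
    """Minimal SIR particle filter for a scalar random-walk state x_t,
    observed as y_t = x_t + Gaussian noise.  A toy stand-in for the
    sampled branch of a Rao-Blackwellized channel estimator."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the random-walk dynamics
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# track a slowly drifting "tap amplitude"
truth = [0.5 + 0.01 * t for t in range(50)]
noise = random.Random(2)
obs = [x + noise.gauss(0.0, 0.2) for x in truth]
est = bootstrap_particle_filter(obs)
```

Resampling at every step keeps the particle cloud concentrated where the likelihood is high, which is the same reason the paper resorts to an RBPF once the Bayesian recursion has no closed form.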

Relevance:

10.00%

Publisher:

Abstract:

The objective of this study is to quantify, in monetary terms, the potential reduction in usage of public health care outlets associated with holding double (public plus private) insurance. In order to address the problem, a probabilistic model for visits to physicians is specified and estimated using data from the Catalonian Health Survey. In addition, a model for the marginal cost of a visit to a physician is estimated using data from a representative sample of fee-for-service payments from a major insurer. Combining the estimates from the two models, it is possible to quantify in monetary terms the costs/savings of alternative policies that affect the adoption of double insurance by the population. The results suggest that the private sector absorbs an important volume of demand which would be re-directed to the public sector if consumers ceased to hold double insurance.
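The two-model accounting described above amounts to multiplying a visit probability by a visit volume and a unit cost. A minimal sketch follows; all figures are purely illustrative and none come from the study.

```python
def expected_public_cost(p_public_visit, visits_per_year, unit_cost, population):
    """Expected annual public-sector cost of physician visits.

    Combines the output of a (hypothetical) probabilistic visit model --
    p_public_visit, the probability a visit goes to a public outlet --
    with a marginal-cost estimate per visit, mirroring the two-model
    structure described in the abstract.
    """
    return p_public_visit * visits_per_year * unit_cost * population

# illustrative scenario: double-insured consumers drop private cover,
# so more of their visits shift to the public sector
cost_with_double = expected_public_cost(0.40, 6.0, 30.0, 100_000)  # euros
cost_single_only = expected_public_cost(0.85, 6.0, 30.0, 100_000)
extra_public_cost = cost_single_only - cost_with_double
```

The policy question in the abstract is exactly this difference: how much public spending the private sector currently absorbs on behalf of the double-insured.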

Relevance:

10.00%

Publisher:

Abstract:

Sobriety checkpoints are not usually randomly located by traffic authorities. As such, information provided by non-random alcohol tests cannot be used to infer the characteristics of the general driving population. In this paper a case study is presented in which the prevalence of alcohol-impaired driving is estimated for the general population of drivers. A stratified probabilistic sample was designed to represent vehicles circulating in non-urban areas of Catalonia (Spain), a region characterized by its complex transportation network and dense traffic around the metropolis of Barcelona. Random breath alcohol concentration tests were performed during spring 2012 on 7,596 drivers. The estimated prevalence of alcohol-impaired drivers was 1.29%, which is roughly a third of the rate obtained in non-random tests. Higher rates were found on weekends (1.90% on Saturdays, 4.29% on Sundays) and especially at night. The rate is higher for men (1.45%) than for women (0.64%) and the percentage of positive outcomes shows an increasing pattern with age. In vehicles with two occupants, the proportion of alcohol-impaired drivers is estimated at 2.62%, but when the driver was alone the rate drops to 0.84%, which might reflect the socialization of drinking habits. The results are compared with outcomes in previous surveys, showing a decreasing trend in the prevalence of alcohol-impaired drivers over time.
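The stratified design implies a weighted, not a raw, prevalence estimate. A minimal sketch with made-up strata follows; the survey's actual strata, weights, and counts are not reproduced here.

```python
def stratified_prevalence(strata):
    """Weighted prevalence estimate from a stratified random sample.

    `strata` maps stratum name -> (population_share, positives, tested).
    Population shares must sum to 1.  Each stratum's sample prevalence is
    weighted by its share of the driving population, which is what lets a
    random roadside design speak for the general population of drivers.
    """
    assert abs(sum(s[0] for s in strata.values()) - 1.0) < 1e-9
    return sum(share * positives / tested
               for share, positives, tested in strata.values())

# illustrative strata only (not the survey's)
strata = {
    "weekday_day":   (0.55, 20, 3000),
    "weekend_day":   (0.25, 40, 2500),
    "weekend_night": (0.20, 90, 2096),
}
prevalence = stratified_prevalence(strata)  # fraction of impaired drivers
```

Weighting is what separates this estimate from the inflated rates of non-random checkpoints, which oversample the high-risk strata.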

Relevance:

10.00%

Publisher:

Abstract:

This project aims to present, clearly and in detail, the structure and operation of the robot as well as the components it is made of. This information is of vital importance when developing applications for the robot. Once the robot's characteristics have been described, the tools needed and/or available for developing software at each level are analyzed, in the simplest and most efficient way possible. The different programming levels are then analyzed and the advantages and disadvantages of each are contrasted. This analysis starts at the highest level and works downward, with the intention of not going lower than necessary. Going down one level in the programming means having to create applications that remain compatible with the higher levels, so the lower one goes, the more the complexity increases. From this analysis, it is concluded that in order to exploit all the robot's capabilities it is necessary to program at the robot's lowest level. The final objective is to obtain a set of programs for each level that make it possible to control the robot and have it follow simple trajectories.

Relevance:

10.00%

Publisher:

Abstract:

Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
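The generic shape of such a deterministic interval rule can be sketched as follows. This is an illustrative score that penalizes interval width and misses, not the paper's specific rule.

```python
def interval_score(low, high, x, penalty=2.0):
    """Deterministic interval scoring rule (illustrative form).

    The agent reports an interval [low, high]; the reward falls with the
    interval's width and with the distance by which the realization x
    misses the interval.  A tight interval that covers x therefore beats
    both a vacuous wide interval and a confident miss.
    """
    width = high - low
    miss = max(low - x, 0.0) + max(x - high, 0.0)
    return -(width + penalty * miss)

# a tight covering interval beats a wide one, and beats a miss
assert interval_score(4, 6, 5) > interval_score(0, 10, 5)
assert interval_score(4, 6, 5) > interval_score(7, 9, 5)
```

Because the reward is deterministic, the agent's risk attitude cannot be exploited by the rule, which is the property the abstract's concave-utility result turns on.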

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider an insider with privileged information that is affected by an independent noise vanishing as the revelation time approaches. At this time, information is available to every trader. Our financial markets are based on Wiener space. In probabilistic terms we obtain an infinite-dimensional extension of Jacod's theorem to cover cases of progressive enlargement of filtrations. The application of this result gives the semimartingale decomposition of the original Wiener process under the progressively enlarged filtration. As an application we prove that if the rate at which the additional noise in the insider's information vanishes is slow enough, then there is no arbitrage and the additional utility of the insider is finite.

Relevance:

10.00%

Publisher:

Abstract:

This paper explores three aspects of strategic uncertainty: its relation to risk, the predictability of behavior, and the subjective beliefs of players. In a laboratory experiment we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities for successful coordination in a wide range of games. For many games, success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities for successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success. In games with a high coordination requirement, most subjects overestimate this probability. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.
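Backing out a subjective coordination probability from a certainty equivalent can be sketched under an assumed utility form. The CRRA specification, the payoffs, and the risk-aversion coefficient below are all illustrative assumptions, not the experiment's.

```python
def implied_coordination_probability(ce, win, lose, rho):
    """Back out a subjective success probability from a certainty equivalent.

    Assumes CRRA utility u(x) = x**(1-rho) / (1-rho) with rho != 1, and a
    binary gamble paying `win` on successful coordination and `lose`
    otherwise.  Solves p*u(win) + (1-p)*u(lose) = u(ce) for p, so the
    same certainty equivalent implies a higher p for a more risk-averse
    subject.
    """
    u = lambda x: x ** (1.0 - rho) / (1.0 - rho)
    return (u(ce) - u(lose)) / (u(win) - u(lose))

# a risk-averse subject (rho = 0.5) valuing the coordination game at 8
p = implied_coordination_probability(ce=8.0, win=15.0, lose=5.0, rho=0.5)
```

This inversion is the step that lets revealed risk aversion from the lottery task be combined with game choices to estimate subjective beliefs.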

Relevance:

10.00%

Publisher:

Abstract:

Much of empirical economics involves regression analysis. However, does the presentation of results affect economists' ability to make inferences for decision-making purposes? In a survey, 257 academic economists were asked to make probabilistic inferences on the basis of the outputs of a regression analysis presented in a standard format. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. However, many respondents underestimated uncertainty by failing to take into account the standard deviation of the estimated residuals. The addition of graphs did not substantially improve inferences. On the other hand, when only graphs were provided (i.e., with no statistics), respondents were substantially more accurate. We discuss implications for improving practice in reporting results of regression analyses.
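The inference the respondents missed can be made concrete: the conditional distribution of the dependent variable spreads around the fitted mean by the residual standard deviation. A minimal sketch, with illustrative coefficients, ignoring parameter-estimation error for simplicity:

```python
def predictive_interval(alpha, beta, x, resid_sd, z=1.96):
    """Approximate 95% interval for y given x under y = alpha + beta*x + e.

    The fitted mean alpha + beta*x is only the center of the conditional
    distribution; the residual standard deviation governs its spread.
    Ignoring resid_sd is exactly the error the survey documents.
    """
    mean = alpha + beta * x
    return mean - z * resid_sd, mean + z * resid_sd

# illustrative regression output: intercept 2.0, slope 0.5, residual SD 3.0
lo, hi = predictive_interval(alpha=2.0, beta=0.5, x=10.0, resid_sd=3.0)
```

The point forecast here is 7.0, but the 95% interval spans roughly 1.1 to 12.9: far wider than respondents who ignored the residual spread would report.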

Relevance:

10.00%

Publisher:

Abstract:

Whereas much literature has documented difficulties in making probabilistic inferences, it has also emphasized the importance of task characteristics in determining judgmental accuracy. Noting that people exhibit remarkable efficiency in encoding frequency information sequentially, we construct tasks that exploit this ability by requiring people to experience the outcomes of sequentially simulated data. We report two experiments. The first involved seven well-known probabilistic inference tasks. Participants differed in statistical sophistication and answered with and without experience obtained through sequentially simulated outcomes, in a design that permitted both between- and within-subject analyses. The second experiment involved interpreting the outcomes of a regression analysis when making inferences for investment decisions. In both experiments, even the statistically naïve make accurate probabilistic inferences after experiencing sequentially simulated outcomes, and many prefer this presentation format. We conclude by discussing theoretical and practical implications.
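Sequential simulation of outcomes can be sketched for a classic probabilistic inference task. The base-rate, sensitivity, and false-positive values below are the textbook mammography-problem numbers, used here purely as an illustration of the format, not data from the experiments.

```python
import random

def simulate_positive_predictive_value(base_rate, sensitivity, false_positive_rate,
                                       n=100_000, seed=7):
    """Estimate P(disease | positive test) by simulating cases one at a
    time -- the 'experienced frequencies' format the abstract describes.
    Instead of applying Bayes' rule, one simply counts how often a
    positive test came from a truly sick case.
    """
    rng = random.Random(seed)
    sick_and_positive = positives = 0
    for _ in range(n):
        sick = rng.random() < base_rate
        positive = rng.random() < (sensitivity if sick else false_positive_rate)
        if positive:
            positives += 1
            sick_and_positive += sick
    return sick_and_positive / positives

ppv = simulate_positive_predictive_value(0.01, 0.80, 0.096)
# exact Bayes answer: 0.008 / (0.008 + 0.09504), roughly 0.078
```

Watching the simulated cases accumulate makes the low posterior feel natural, which is the mechanism the abstract credits for the accuracy of even statistically naïve participants.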

Relevance:

10.00%

Publisher:

Abstract:

Experiments in which subjects play simultaneously several finite prisoner's dilemma supergames, with and without an outside option, reveal that: (i) subjects use probabilistic start- and end-effect behaviour; (ii) the freedom to choose whether to play the prisoner's dilemma game enhances cooperation; (iii) if the payoff for simultaneous defection is negative, subjects' tendency to avoid losses leads them to cooperate, while this tendency makes them stick to mutual defection if its payoff is positive.

Relevance:

10.00%

Publisher:

Abstract:

Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with inter-attribute positive correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and full cumulative dominance compliance and show that DEBA satisfies these properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics will choose a best alternative and show that, even with many attributes, this is not small. We also derive an upper bound for the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
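DEBA itself is simple to state: process the binary attributes in order of non-increasing weight, at each step eliminating the alternatives that lack the current attribute. A minimal sketch with illustrative alternatives and weights:

```python
def deba(alternatives, weights):
    """Deterministic Elimination-By-Aspects on binary attributes.

    Attributes are processed in order of non-increasing weight; at each
    attribute, keep only the surviving alternatives with value 1 (unless
    none has a 1, in which case all survive).  Stops as soon as a single
    alternative remains; returns the surviving indices.
    """
    survivors = list(range(len(alternatives)))
    for j, _ in sorted(enumerate(weights), key=lambda t: -t[1]):
        keep = [i for i in survivors if alternatives[i][j] == 1]
        if keep:
            survivors = keep
        if len(survivors) == 1:
            break
    return survivors

# three alternatives over three binary attributes, weights non-increasing
alts = [(1, 0, 1), (1, 1, 0), (0, 1, 1)]
chosen = deba(alts, weights=[0.5, 0.3, 0.2])
```

In this example the survivor is also the alternative with the highest weighted sum (0.8), illustrating the kind of agreement between the heuristic and the weighted-additive benchmark that the cumulative-dominance analysis explains.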

Relevance:

10.00%

Publisher:

Abstract:

Why was England first? And why Europe? We present a probabilistic model that builds on big-push models by Murphy, Shleifer and Vishny (1989), combined with hierarchical preferences. The interaction of exogenous demographic factors (in particular the English low-pressure variant of the European marriage pattern) and redistributive institutions such as the old Poor Law combined to make an Industrial Revolution more likely. Essentially, industrialization is the result of having a critical mass of consumers that is rich enough to afford (potentially) mass-produced goods. Our model is then calibrated to match the main characteristics of the English economy in 1750 and the observed transition until 1850. This allows us to address explicitly one of the key features of the British Industrial Revolution unearthed by economic historians over the last three decades: the slowness of productivity and output change. In our calibration, we find that the probability of Britain industrializing is 5 times larger than France's. Contrary to the recent argument by Pomeranz, China in the 18th century had essentially no chance to industrialize at all. This difference is decomposed into a demographic and a policy component, with the former being far more important than the latter.