60 results for PROBABILISTIC FORECASTS
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion relates to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part of probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
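As a hedged, minimal sketch of the kind of probability 'learning' described here (not the authors' actual procedure or data), the following Python snippet performs a conjugate Beta-Binomial update for the proportion of toners exhibiting some characteristic; all counts are hypothetical.

```python
# Minimal sketch of Bayesian 'learning' of a probability from data via a
# conjugate Beta-Binomial update; the prior and the counts are hypothetical.
from scipy import stats

alpha0, beta0 = 1.0, 1.0   # uniform Beta(1, 1) prior over the proportion
k, n = 37, 60              # hypothetical: 37 of 60 toners show the characteristic

# Conjugate update: the posterior is Beta(alpha0 + k, beta0 + n - k)
posterior = stats.beta(alpha0 + k, beta0 + n - k)
print(f"posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: {tuple(round(x, 3) for x in posterior.interval(0.95))}")
```

In a Bayesian network setting, a posterior of this kind would feed the probability table of the corresponding node.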
Abstract:
A first assessment of debris flow susceptibility at a large scale was performed along National Road N7, Argentina. Numerous catchments are prone to debris flows and likely to endanger road users. A 1:50,000 susceptibility map was created. The use of a DEM (30 m grid) associated with three complementary criteria (slope, contributing area, curvature) allowed the identification of potential source areas. The debris flow spreading was estimated using a process- and GIS-based model (Flow-R) based on basic probabilistic and energy calculations. The best-fit values for the coefficient of friction and the mass-to-drag ratio of the PCM model were found to be μ = 0.02 and M/D = 180, and the resulting propagation on one of the calibration sites was validated using the Coulomb friction model. The results are realistic and will be useful to determine which areas need to be prioritized for detailed studies.
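As a hedged illustration of the kind of energy calculation behind such runout estimates, here is a minimal Python sketch of the two-parameter PCM velocity update using the best-fit values quoted above; the slope profile is hypothetical and this is not the actual Flow-R code.

```python
# Sketch of the two-parameter PCM runout model: segment-wise velocity
# update with friction coefficient MU and mass-to-drag ratio M_D.
import math

G = 9.81      # gravity, m/s^2
MU = 0.02     # best-fit friction coefficient from the abstract
M_D = 180.0   # best-fit mass-to-drag ratio, m

def pcm_velocity(v_prev, slope_deg, seg_len):
    """Velocity at the end of a slope segment; 0 means the flow stops."""
    alpha = G * (math.sin(math.radians(slope_deg)) - MU * math.cos(math.radians(slope_deg)))
    decay = math.exp(-2.0 * seg_len / M_D)
    v2 = alpha * M_D * (1.0 - decay) + v_prev ** 2 * decay
    return math.sqrt(v2) if v2 > 0 else 0.0

# Hypothetical profile: (slope in degrees, segment length in m)
profile = [(35, 200), (25, 300), (12, 400), (5, 500)]
v = 0.0
for slope, length in profile:
    v = pcm_velocity(v, slope, length)
    print(f"slope {slope:>2} deg, length {length} m -> v = {v:6.2f} m/s")
    if v == 0.0:
        print("flow stops on this segment")
        break
```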
Abstract:
An adaptation technique based on the synoptic atmospheric circulation to forecast local precipitation, namely the analogue method, has been implemented for the western Swiss Alps. During the calibration procedure, relevance maps were established for the geopotential height data. These maps highlight the locations where the synoptic circulation was found to be of interest for precipitation forecasting at two rain gauge stations (Binn and Les Marécottes), both located in the alpine Rhône catchment at a distance of about 100 km from each other. These two stations are sensitive to different atmospheric circulations. We have observed that the most relevant data for the analogue method can be found where specific atmospheric circulation patterns appear concomitantly with heavy precipitation events. Those skilled regions are coherent with the atmospheric flows illustrated, for example, by means of the back trajectories of air masses. Indeed, the circulation recurrently diverges from the climatology during days with strong precipitation on the southern part of the alpine Rhône catchment. We have found that of the 152 days with a precipitation amount above 50 mm at the Binn station, only 3 did not show a trajectory with a southerly flow, meaning that such a circulation was present for 98% of the events. The time evolution of the relevance maps confirms that the atmospheric circulation variables have significantly better forecasting skill close to the precipitation period, and that it seems pointless for the analogue method to consider circulation information days before a precipitation event as a primary predictor. Even though the occurrence of some critical circulation patterns leading to heavy precipitation events can be detected by precursors at remote locations and 1 week ahead (Grazzini, 2007; Martius et al., 2008), time extrapolation by the analogue method seems to be rather poor. This would suggest, in accordance with previous studies (Obled et al., 2002; Bontron and Obled, 2005), that time extrapolation should be done by the global circulation model, which can process atmospheric variables that can then be used by the adaptation method.
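A minimal sketch of the analogue principle (not the operational implementation): archived days are ranked by the similarity of their geopotential fields to the target day, and the precipitation observed on the best analogues serves as an empirical probabilistic forecast. All data below are synthetic placeholders.

```python
# Sketch of the analogue method: find past days whose geopotential-height
# fields resemble the target day, then use their observed precipitation
# as an empirical forecast distribution. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_gridpoints = 5000, 48   # hypothetical archive length and grid size
archive_z500 = rng.normal(5600, 80, (n_days, n_gridpoints))  # geopotential fields
archive_precip = rng.gamma(0.6, 8.0, n_days)                 # observed precipitation, mm
target_z500 = rng.normal(5600, 80, n_gridpoints)             # field for the day to forecast

# Euclidean distance as a simple similarity criterion (operational versions
# often score pressure-field gradients, e.g. with the S1 score, instead).
dist = np.linalg.norm(archive_z500 - target_z500, axis=1)
best = np.argsort(dist)[:30]      # the 30 closest analogue days

analogue_precip = archive_precip[best]
print(f"P(precip > 10 mm) ~ {np.mean(analogue_precip > 10):.2f}")
print(f"forecast quantiles (20/50/80%): {np.percentile(analogue_precip, [20, 50, 80]).round(1)}")
```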
Abstract:
This paper questions practitioners' deterministic approach(es) in forensic identification and notes the limits of their conclusions, in order to encourage discussion of current practices. With this end in view, a hypothetical discussion between an expert in dentistry and an enthusiastic member of a jury, eager to understand the scientific principles of evidence interpretation, is presented. This discussion leads us to regard any argument aiming at identification as probabilistic.
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application running on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end-user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that allows us to obtain an approximated view of the system or part of it. This approximated view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize the broadcast reliability. Here, the broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of quotas of messages reflecting the receiving and sending capacities at each node. To allow deployment in a large-scale system, we take the available memory at processes into account by limiting the view they have to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
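To make the notion of broadcast reliability over a tree overlay concrete, here is a minimal sketch (with a hypothetical tree and hypothetical link reliabilities, not the thesis' actual protocols) in which the delivery probability to a node is the product of the link reliabilities along its path from the root.

```python
# Sketch: reliability of tree-based broadcast as a function of path
# reliability. Tree structure and link reliabilities are hypothetical.

tree = {                      # child -> (parent, link reliability)
    "B": ("A", 0.99),
    "C": ("A", 0.95),
    "D": ("B", 0.90),
    "E": ("B", 0.97),
    "F": ("C", 0.85),
}

def delivery_prob(node):
    """Probability that a broadcast from the root reaches `node`:
    the product of link reliabilities along its path upward."""
    p = 1.0
    while node in tree:
        parent, rel = tree[node]
        p *= rel
        node = parent
    return p

for n in sorted(tree):
    print(f"P(deliver to {n}) = {delivery_prob(n):.4f}")

avg = sum(delivery_prob(n) for n in tree) / len(tree)
print(f"mean delivery probability = {avg:.4f}")
```

Under this reading, choosing the tree overlay amounts to maximizing such path products subject to the per-node message quotas.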
Abstract:
Despite advances in understanding basic organizational principles of the human basal ganglia, accurate in vivo assessment of their anatomical properties is essential to improve early diagnosis in disorders with cortico-subcortical pathology and to optimize target planning in deep brain stimulation. The main goal of this study was the detailed topological characterization of limbic, associative, and motor subdivisions of the subthalamic nucleus (STN) in relation to the corresponding cortico-subcortical circuits. To this aim, we used magnetic resonance imaging and independently investigated anatomical connectivity via white matter tracts alongside brain tissue properties. On the basis of probabilistic diffusion tractography we identified STN subregions with predominantly motor, associative, and limbic connectivity. We then computed, for each of the non-overlapping STN subregions, the covariance between local brain tissue properties and the rest of the brain using high-resolution maps of magnetization transfer (MT) saturation and longitudinal (R1) and transverse (R2*) relaxation rates. The demonstrated spatial distribution pattern of covariance between brain tissue properties linked to myelin (R1 and MT) and iron (R2*) content clearly segregates between motor and limbic basal ganglia circuits. We interpret the demonstrated covariance pattern as evidence for shared tissue properties within a functional circuit, which is closely linked to its function. Our findings open new possibilities for investigating changes in the established covariance pattern, aiming at accurate diagnosis of basal ganglia disorders and prediction of treatment outcome.
Abstract:
Given the rate of projected environmental change for the 21st century, urgent adaptation and mitigation measures are required to slow down the ongoing erosion of biodiversity. Even though increasing evidence shows that recent human-induced environmental changes have already triggered species' range shifts, changes in phenology and species' extinctions, accurate projections of species' responses to future environmental changes are more difficult to ascertain. This is problematic, since there is a growing awareness of the need to adopt proactive conservation planning measures using forecasts of species' responses to future environmental changes. There is a substantial body of literature describing and assessing the impacts of various scenarios of climate and land-use change on species' distributions. Model predictions include a wide range of assumptions and limitations that are widely acknowledged but compromise their use for developing reliable adaptation and mitigation strategies for biodiversity. Indeed, amongst the most used models, few, if any, explicitly deal with migration processes, the dynamics of populations at the "trailing edge" of shifting distributions, species' interactions and the interaction between the effects of climate and land-use. In this review, we propose two main avenues to advance the understanding and prediction of the different processes occurring at the leading and trailing edges of species' distributions in response to any global change phenomena. Deliberately focusing on plant species, we first explore the different ways to incorporate species' migration in the existing modelling approaches, given data and knowledge limitations and the dual effects of climate and land-use factors. Secondly, we explore the mechanisms and processes happening at the trailing edge of a shifting species' distribution and how to implement them into a modelling approach. We conclude this review with clear guidelines on how such modelling improvements will benefit conservation strategies in a changing world.
Abstract:
Almost 30 years ago, Bayesian networks (BNs) were developed in the field of artificial intelligence as a framework that should assist researchers and practitioners in applying the theory of probability to inference problems of more substantive size and, thus, to more realistic and practical problems. Since the late 1980s, Bayesian networks have also attracted researchers in forensic science, and this tendency has considerably intensified throughout the last decade. This review article provides an overview of the scientific literature that describes research on Bayesian networks as a tool for studying, developing and implementing probabilistic procedures for evaluating the probative value of particular items of scientific evidence in forensic science. Primary attention is drawn here to evaluative issues that pertain to forensic DNA profiling evidence, because this is one of the main categories of evidence whose assessment has been studied through Bayesian networks. The scope of topics is large and includes almost any aspect that relates to forensic DNA profiling. Typical examples are inference of source (or 'criminal identification'), relatedness testing, database searching and special trace evidence evaluation (such as mixed DNA stains or stains with low quantities of DNA). The perspective of the review presented here is not exclusively restricted to DNA evidence, but also includes relevant references and discussion on both the concept of Bayesian networks and their general usage in legal sciences as one among several different graphical approaches to evidence evaluation.
Abstract:
This paper discusses five strategies to deal with five types of errors in Qualitative Comparative Analysis (QCA): condition errors, systematic errors, random errors, calibration errors, and deviant case errors. These strategies are the comparative inspection of complex, intermediate, and parsimonious solutions; the use of an adjustment factor; the use of probabilistic criteria; the testing of the robustness of calibration parameters; and the use of a frequency threshold for observed combinations of conditions. The strategies are systematically reviewed, assessed, and evaluated as regards their applicability, advantages, limitations, and complementarities.
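As one hedged reading of the "probabilistic criteria" strategy, the sketch below runs an exact binomial test of whether a combination's observed consistency significantly exceeds a benchmark; the counts and the benchmark are hypothetical, and this operationalization is an assumption rather than the paper's own procedure.

```python
# Hedged sketch of a probabilistic criterion in QCA: an exact binomial
# test of observed consistency against a benchmark; numbers hypothetical.
from scipy import stats

n_cases = 12          # cases displaying a given combination of conditions
n_consistent = 11     # of those, cases also displaying the outcome
benchmark = 0.75      # hypothetical benchmark consistency

test = stats.binomtest(n_consistent, n_cases, benchmark, alternative="greater")
print(f"observed consistency: {n_consistent / n_cases:.2f}")
print(f"p-value against the {benchmark:.2f} benchmark: {test.pvalue:.3f}")
```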
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye)
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean network model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to that of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
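For readers unfamiliar with the baseline model, here is a minimal Python sketch of a classical Kauffman random Boolean network with synchronous updates (the very assumption the thesis relaxes); sizes, seeds and wiring are illustrative only.

```python
# Sketch of a Kauffman random Boolean network: N nodes, K random inputs
# per node, one random Boolean lookup table per node, synchronous update.
import numpy as np

rng = np.random.default_rng(42)
N, K = 12, 2
inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, (N, 2 ** K))   # random Boolean function per node

def step(state):
    """Synchronous update: every node reads its K inputs and applies its table."""
    idx = np.zeros(N, dtype=int)
    for j in range(K):
        idx = idx * 2 + state[inputs[:, j]]   # encode input pattern as an integer
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, N)
for t in range(8):
    print(t, "".join(map(str, state)))
    state = step(state)
```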
Abstract:
Prior probabilities represent a core element of the Bayesian probabilistic approach to relatedness testing. This letter comments on the commentary 'Use of prior odds for missing persons identifications' by Budowle et al. (2011), published recently in this journal. Contrary to Budowle et al. (2011), we argue that the concept of prior probabilities (i) is not endowed with the notion of objectivity, (ii) is not a case for computation and (iii) does not require new guidelines edited by the forensic DNA community - as long as probability is properly considered as an expression of personal belief. Please see related article: http://www.investigativegenetics.com/content/3/1/3
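For context, the debate revolves around the odds form of Bayes' theorem underlying relatedness testing, where $H_1$ and $H_2$ are the competing hypotheses (e.g., that two individuals are related or unrelated) and $E$ is the DNA evidence:

$$\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)} \;=\; \underbrace{\frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}}_{\text{likelihood ratio}} \times \underbrace{\frac{\Pr(H_1)}{\Pr(H_2)}}_{\text{prior odds}}$$

The likelihood ratio comes from the genetic evidence; the prior odds, on the letter's view, express personal belief rather than an objectively computable quantity.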
The hematology laboratory in blood doping (BD): 2014 update on the athlete biological passport (ABP)
Abstract:
Introduction: Blood doping (BD) is the use of erythropoiesis-stimulating agents (ESAs) and/or transfusion to increase aerobic performance in athletes. Direct toxicological techniques are insufficient to unmask sophisticated doping protocols. The hematological module of the ABP (World Anti-Doping Agency) associates decision-support technology and expert assessment to indirectly detect the hematological effects of BD. Methods: The ABP module is based on blood parameters, under strict pre-analytical and analytical rules for collection, storage and transport at 2-12°C, and internal and external QC. Accuracy, reproducibility and interlaboratory harmonization fulfill forensic standards. Blood samples are collected in competition and out-of-competition. Primary parameters for longitudinal monitoring are: hemoglobin (HGB); reticulocyte percentage (RET%); and the OFF score, an indicator of suppressed erythropoiesis, calculated as OFF = HGB (g/L) − 60·√(RET%). Statistical calculation predicts individual expected limits by probabilistic inference. Secondary parameters are RBC, HCT, MCHC, MCH, MCV, RDW and IRF. ABP profiles flagged as atypical are reviewed by experts in hematology, pharmacology, sports medicine or physiology, and classified as: normal; suspect (to target); likely due to BD; or likely due to pathology. Results: Thousands of athletes worldwide are currently monitored. Since 2010, at least 35 athletes have been sanctioned and others are being prosecuted on the sole basis of an abnormal ABP, with a 240% increase in positivity to direct tests for ESAs thanks to improved targeting of suspicious athletes (WADA data). Specific doping scenarios have been identified by the experts (Table and Figure). Figure: typical HGB and RET profiles in two highly suspicious athletes. A. Sample 2: simultaneous increases in HGB and RET (likely ESA stimulation) in a male. B. Samples 3, 6 and 7: "OFF" picture, with high HGB and low RET, in a female; sample 10: normal HGB and increased RET (ESA use or blood withdrawal). Conclusions: The ABP is a powerful tool for indirect doping detection, based on the recognition of specific, unphysiological changes triggered by blood doping. The effect of factors of heterogeneity, such as sex and altitude, must also be considered. Schumacher YO, et al. Drug Test Anal 2012, 4:846-853. Sottas PE, et al. Clin Chem 2011, 57:969-976.
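A minimal sketch of the OFF score computation follows. The samples and the fixed flagging limits are hypothetical; the real ABP derives individual expected limits by Bayesian probabilistic inference rather than using fixed cut-offs.

```python
# Sketch of the ABP OFF score, OFF = HGB (g/L) - 60 * sqrt(RET%), with
# hypothetical samples and illustrative fixed limits (the actual ABP
# computes individual limits by probabilistic inference).
import math

def off_score(hgb_g_per_l, ret_pct):
    return hgb_g_per_l - 60.0 * math.sqrt(ret_pct)

# Hypothetical longitudinal samples: (HGB in g/L, RET in %)
samples = [(145, 1.1), (168, 0.3), (150, 1.0)]
lower, upper = 80.0, 120.0   # hypothetical individual reference limits

for i, (hgb, ret) in enumerate(samples, 1):
    off = off_score(hgb, ret)
    flag = "ATYPICAL" if not (lower <= off <= upper) else "normal"
    print(f"sample {i}: OFF = {off:6.1f} -> {flag}")
```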
Abstract:
In a series of three experiments, participants made inferences about which of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
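A minimal sketch of the recognition heuristic's decision rule (the recognition set below is hypothetical):

```python
# Sketch of the recognition heuristic: when exactly one of two objects is
# recognized, infer that the recognized one scores higher on the criterion.
recognized = {"Munich", "Berlin", "Hamburg"}   # cities a participant recognizes

def recognition_heuristic(a, b):
    """Return the inferred higher-scoring object, or None if the
    heuristic does not discriminate (both or neither recognized)."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None   # fall back on additional knowledge or guessing

print(recognition_heuristic("Munich", "Gütersloh"))   # -> Munich
print(recognition_heuristic("Munich", "Berlin"))      # -> None
```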
Abstract:
The visual cortex in each hemisphere is linked to the opposite hemisphere by axonal projections that pass through the splenium of the corpus callosum. Visual-callosal connections in humans and macaques are found along the V1/V2 border, where the vertical meridian is represented. Here we identify the topography of V1 vertical midline projections through the splenium in six human subjects with normal vision using diffusion-weighted MR imaging and probabilistic diffusion tractography. Tractography seed points within the splenium were classified according to their estimated connectivity profiles to topographic subregions of V1, as defined by functional retinotopic mapping. First, we report a ventral-dorsal mapping within the splenium, with fibers from ventral V1 (representing the upper visual field) projecting to the inferior-anterior corner of the splenium and fibers from dorsal V1 (representing the lower visual field) projecting to the superior-posterior end. Second, we also report an eccentricity gradient of projections from foveal-to-peripheral V1 subregions running in the anterior-superior to posterior-inferior direction, orthogonal to the dorsal-ventral mapping. These results confirm and add to a previous diffusion MRI study (Dougherty et al., 2005), which identified a dorsal/ventral mapping of human splenial fibers. These findings yield a more detailed view of the structural organization of the splenium than previously reported and offer new opportunities to study structural plasticity in the visual system.
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. This paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot spot detection and monitoring network optimization, are discussed as well.
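As a hedged sketch of the kind of kernel-based classification applied to soil-type mapping (synthetic data, not the paper's datasets or code):

```python
# Sketch of kernel-based classification for spatial categorical data:
# an RBF-kernel SVM trained on synthetic sample coordinates and classes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(0, 100, (200, 2))   # synthetic sampling coordinates, km
# Synthetic "soil class" defined by a smooth spatial rule, as a stand-in
# for labels observed at monitoring-network locations.
y = (X[:, 0] + 10 * np.sin(X[:, 1] / 10) > 50).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma=0.1).fit(X, y)

grid = np.array([[25.0, 40.0], [75.0, 60.0]])   # unsampled locations to classify
print(clf.predict(grid))
```

The RBF kernel makes the decision surface depend smoothly on distance between sample locations, which is why such methods suit spatial mapping tasks.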