30 results for probabilistic roadmap


Relevance:

20.00%

Publisher:

Abstract:

How do probabilistic models represent their targets, and how do they allow us to learn about them? The answer to this question depends on a number of details, in particular on the meaning of the probabilities involved. To classify the options, a minimalist conception of representation (Suárez 2004) is adopted: modelers devise substitutes ("sources") of their targets and investigate them to infer something about the target. Probabilistic models allow us to infer probabilities about the target from probabilities about the source. This leads to a framework in which we can systematically distinguish between different modes of probabilistic modeling. I develop a fully Bayesian view of probabilistic modeling, but I argue that, as an alternative, Bayesian degrees of belief about the target may be derived from ontic probabilities about the source. Remarkably, some accounts of ontic probabilities can avoid problems if they are supposed to apply to sources only.
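The source-to-target inference described above can be illustrated with a toy Bayesian sketch. The coin example and the discrete parameter grid below are purely illustrative assumptions, not from the paper: outcomes observed on a source model update degrees of belief about a target parameter.

```python
# Toy sketch (hypothetical example): the source is a biased coin with
# unknown bias theta; observing source outcomes updates Bayesian degrees
# of belief about the target quantity theta.

def posterior_on_grid(heads, tails, grid):
    """Discrete Bayesian update with a uniform prior over candidate biases."""
    likelihoods = [theta**heads * (1 - theta)**tails for theta in grid]
    total = sum(likelihoods)
    return [lk / total for lk in likelihoods]

grid = [0.1, 0.3, 0.5, 0.7, 0.9]
post = posterior_on_grid(heads=7, tails=3, grid=grid)
# After 7 heads in 10 flips, the posterior concentrates near theta = 0.7.
```

The same mechanics apply whether the source probabilities are read as ontic or as further degrees of belief; only the interpretation of the inputs changes.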

Relevance:

20.00%

Publisher:

Abstract:

Derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of provided physical quantities is required to construct an error budget of higher-level products and to correctly interpret the final results of a particular analysis. Regarding the generation of products based on satellite data, a common input consists of a cloud mask which allows discrimination between surface and cloud signals. Further, the surface information is divided between snow and snow-free components. At any step of this discrimination process, a misclassification in a cloud/snow mask propagates to higher-level products and may alter their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited for 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed which provides three types of probability estimates: cloudy/clear-sky, cloudy/snow and clear-sky/snow conditions. As opposed to the majority of available techniques, which are usually based on a decision-tree approach, the PCM algorithm uses all spectral, angular and ancillary information in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test was overcome by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of an invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which exhibit diverse difficulties for cloud/snow masking algorithms.
The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The outcomes of the conducted analyses demonstrated the good detection skill of the PCM method, with results comparable to or better than the reference PPS algorithm.
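The binned look-up-table idea can be sketched as follows. The feature names, interval edges and probability values below are hypothetical placeholders for the precomputed LUTs the abstract describes: instead of one threshold per spectral test, feature space is cut into bins, and each bin stores a precomputed cloud probability.

```python
import bisect

# Assumed toy interval edges defining the bins of a 2-D information space.
REFLECTANCE_EDGES = [0.2, 0.4, 0.6]   # visible reflectance boundaries
BT_DIFF_EDGES = [-2.0, 0.0, 2.0]      # brightness-temperature-difference boundaries

# Assumed toy LUT: P(cloud) indexed by (reflectance bin, BT-difference bin).
LUT = [
    [0.05, 0.10, 0.20, 0.30],
    [0.15, 0.30, 0.50, 0.60],
    [0.40, 0.60, 0.80, 0.90],
    [0.70, 0.85, 0.95, 0.99],
]

def cloud_probability(reflectance, bt_diff):
    """Locate the pixel's bin in each dimension and return the stored probability."""
    i = bisect.bisect_right(REFLECTANCE_EDGES, reflectance)
    j = bisect.bisect_right(BT_DIFF_EDGES, bt_diff)
    return LUT[i][j]

# A bright pixel with a large BT difference falls in a high-probability bin.
```

In the real algorithm the space has more dimensions (spectral, angular and ancillary) and the LUT entries are derived from training data, but the lookup step is the same.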

Relevance:

20.00%

Publisher:

Abstract:

A fundamental capacity of the human brain is to learn relations (contingencies) between environmental stimuli and the consequences of their occurrence. Some contingencies are probabilistic; that is, they predict an event in some situations but not in all. Animal studies suggest that damage to limbic structures or the prefrontal cortex may disturb probabilistic learning. The authors studied the learning of probabilistic contingencies in amnesic patients with limbic lesions, patients with prefrontal cortex damage, and healthy controls. Across 120 trials, participants learned contingent relations between spatial sequences and a button press. Amnesic patients showed learning comparable to that of control subjects but failed to indicate what they had learned. Across the last 60 trials, amnesic patients and control subjects learned to avoid a noncontingent choice better than frontal patients did. These results indicate that probabilistic learning does not depend on the brain structures supporting declarative memory.

Relevance:

20.00%

Publisher:

Abstract:

Background: The CAMbrella coordination action was funded within Framework Programme 7. Its aim is to provide a research roadmap for clinical and epidemiological research on complementary and alternative medicine (CAM) that is appropriate for the health needs of European citizens and acceptable to their national research institutes and healthcare providers in both the public and private sectors. One major issue on the European research agenda is demographic change and its impact on health care. Our vision for 2020 is that there is an evidence base that enables European citizens to make informed decisions about CAM, both positive and negative. This roadmap proposes a strategic research agenda for the field of CAM designed to address future European health care challenges. It is based on the results of several CAMbrella work packages, literature reviews and expert discussions, including a consensus meeting. Methods: We first conducted a systematic literature review on key issues in clinical and epidemiological research in CAM to identify the general concepts and methods as well as the strengths and weaknesses of current CAM research. These findings were discussed in a workshop (Castellaro, Italy, September 7–9th 2011) with international CAM experts, and strategic and methodological recommendations were defined in order to improve the rigor and relevance of CAM research. These recommendations provide the basis for the research roadmap, which was subsequently discussed at a consensus conference (Järna, Sweden, May 9–11th 2012) with all CAMbrella members and the CAMbrella advisory board. The roadmap was revised after this discussion in CAMbrella Work Package (WP) 7 and finally approved by CAMbrella's scientific steering committee on September 26th 2012. Results: Our main findings show that CAM is very heterogeneous in terms of definitions and legal regulations between European countries.
In addition, citizens’ needs and attitudes towards CAM, as well as the use and provision of CAM, differ significantly between countries. In terms of research methodology, there was consensus that CAM researchers should make use of all the commonly accepted scientific research methods and employ them with utmost diligence within a mixed-methods framework. Conclusions: We propose 6 core areas of research that should be investigated to achieve a robust knowledge base and to allow stakeholders to make informed decisions:
Research into the prevalence of CAM in Europe: Reviews show that we do not know enough about the circumstances in which CAM is used by Europeans. To enable a common European strategic approach, a clear picture of current use is of the utmost importance.
Research into differences regarding citizens’ attitudes and needs towards CAM: Citizens are the drivers of CAM utilization. Their needs and views on CAM are a key priority, and their interests must be investigated and addressed in future CAM research.
Research into the safety of CAM: Safety is a key issue for European citizens. CAM is considered safe, but reliable data are scarce, although urgently needed in order to assess the risk and cost-benefit ratio of CAM.
Research into the comparative effectiveness of CAM: Everybody needs to know in what situations CAM is a reasonable choice. We therefore recommend a clear emphasis on concurrent evaluation of the overall effectiveness of CAM as an additional or alternative treatment strategy in real-world settings.
Research into effects of context and meaning: The impact of context and meaning on the outcome of CAM treatments must be investigated; it is likely to be significant.
Research into different models of CAM health care integration: There are different models of integrating CAM into conventional medicine throughout Europe, each with its respective strengths and limitations.
These models should be described and concurrently evaluated; innovative models of CAM provision in health care systems should be one focus for CAM research. We also propose a methodological framework for CAM research. We consider that a framework of mixed methodological approaches is likely to yield the most useful information. In this model, all available research strategies, including comparative effectiveness research utilising quantitative and qualitative methods, should be considered to enable us to secure the greatest density of knowledge possible. Stakeholders, such as citizens, patients and providers, should be involved in every stage of developing the specific and relevant research questions, the study design and the assurance of real-world relevance for the research. Furthermore, structural and sufficient financial support for research into CAM is needed to strengthen CAM research capacity if we wish to understand why it remains so popular within the EU. In order to consider employing CAM as part of the solution to the health care, health creation and self-care challenges we face by 2020, it is vital to obtain a robust picture of CAM use and reliable information about its cost, safety and effectiveness in real-world settings. We need to consider the availability, accessibility and affordability of CAM. We need to engage in research excellence and utilise comparative effectiveness approaches and mixed methods to obtain these data. Our recommendations are both strategic and methodological. They are presented for the consideration of researchers and funders, while being designed to answer the important and implicit questions posed by EU citizens currently using CAM in apparently increasing numbers. We propose that the EU actively support an EU-wide strategic approach that facilitates the development of CAM research.
This could be achieved in the first instance through funding a European CAM coordinating research office dedicated to fostering systematic communication between EU governments; public, charitable and industry funders; and researchers, citizens and other stakeholders. The aim of this office would be to coordinate research strategy developments and research funding opportunities, as well as to document and disseminate international research activities in this field. As a second step, with the aim of developing sustainability, a European Centre for CAM should be established to take over the monitoring and further development of a coordinated research strategy for CAM; it should also have funds that can be awarded to foster high-quality, robust, independent research with a focus on citizens' health needs and pan-European collaboration. We wish to establish solid funding for CAM research to adequately inform health care and health creation decision-making throughout the EU. This centre would ensure that our vision of a common, strategic and scientifically rigorous approach to CAM research becomes our legacy and Europe's reality. We are confident that our recommendations will serve these essential goals for EU citizens.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, encrypted hash codes of two names differ completely if the plain names differ by only a single character. Therefore, standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In the Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables.
RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
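The Bloom-filter step might look roughly like this. This is an illustrative sketch, not the P3RL implementation: the bit length, number of hash functions and bigram padding are assumptions. Names are split into bigrams, each bigram sets a few bit positions, and two encodings can then be compared with the Dice coefficient even though the plain names are never shared.

```python
import hashlib

BITS = 128        # assumed Bloom filter length
NUM_HASHES = 4    # assumed number of hash functions per bigram

def bigrams(name):
    """Split a padded, lowercased name into its character bigrams."""
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom_encode(name):
    """Map each bigram to NUM_HASHES bit positions via seeded SHA-256."""
    bits = set()
    for gram in bigrams(name):
        for seed in range(NUM_HASHES):
            digest = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % BITS)
    return bits

def dice_similarity(a, b):
    """Dice coefficient of two bit sets: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

# A single-character typo preserves most bigrams, so similarity stays high.
alice = bloom_encode("Alice")
alica = bloom_encode("Alica")
bob = bloom_encode("Bob")
```

This is why Bloom filters suit probabilistic linkage where plain hashing fails: a one-character difference changes only a few bigrams, so the encodings stay similar instead of differing completely.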

Relevance:

20.00%

Publisher:

Abstract:

In this article, we introduce the probabilistic justification logic PJ, a logic in which we can reason about the probability of justification statements. We present its syntax and semantics, and establish a strong completeness theorem. Moreover, we investigate the relationship between PJ and the logic of uncertain justifications.

Relevance:

20.00%

Publisher:

Abstract:

Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review the risk or cumulative incidence of psychosis, the person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes’ theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding the basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment.
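A worked example of the Bayesian measures listed above, with assumed, purely illustrative numbers: a CHR instrument with sensitivity 0.80 and specificity 0.70, applied in a population where the true transition risk (prevalence) is 0.20.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(psychosis | positive test) by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

ppv = positive_predictive_value(0.80, 0.70, 0.20)
# 0.16 / (0.16 + 0.24) = 0.4: even a reasonable test yields a modest PPV
# at low prevalence, which is one reason prognostic reasoning is demanding.
```

The dependence of the PPV on prevalence is exactly why the same instrument performs differently in enriched CHR clinics and in general-population screening.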

Relevance:

20.00%

Publisher:

Abstract:

The European Hematology Association (EHA) Roadmap for European Hematology Research highlights major achievements in the diagnosis and treatment of blood disorders and identifies the greatest unmet clinical and scientific needs in those areas to enable better funded, more focused European hematology research. Around 300 experts contributed to the EHA-initiated consensus document, which will help European policy makers, research funders, research organizations, researchers, and patient groups make better informed decisions on hematology research. It also aims to raise public awareness of the burden of blood disorders on European society, which purely in economic terms is estimated at 23 billion euros per year, a level of cost that is not matched by current European hematology research funding. In recent decades, hematology research has improved our fundamental understanding of the biology of blood disorders and has improved diagnostics and treatments, sometimes in revolutionary ways. This progress highlights the potential of focused basic research programs such as this EHA Roadmap. The EHA Roadmap identifies nine sections in hematology: normal hematopoiesis, malignant lymphoid and myeloid diseases, anemias and related diseases, platelet disorders, blood coagulation and hemostatic disorders, transfusion medicine, infections in hematology, and hematopoietic stem cell transplantation. These sections span 60 smaller groups of diseases or disorders. The EHA Roadmap identifies priorities and needs across the field of hematology, including those to develop targeted therapies based on genomic profiling and chemical biology, to eradicate minimal residual malignant disease, and to develop cellular immunotherapies, combination treatments, gene therapies, hematopoietic stem cell treatments, and treatments that are better tolerated by elderly patients. Received December 15, 2015. Accepted January 27, 2016. Copyright © 2016, Ferrata Storti Foundation.

Relevance:

20.00%

Publisher:

Abstract:

The logic PJ is a probabilistic logic defined by adding (noniterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, namely Π^p_2-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.

Relevance:

20.00%

Publisher:

Abstract:

We present a probabilistic justification logic, PPJ, to study rational belief, degrees of belief and justifications. We establish soundness and completeness for PPJ and show that its satisfiability problem is decidable. In the last part we use PPJ to provide a solution to the lottery paradox.

Relevance:

20.00%

Publisher:

Abstract:

We investigated whether a pure perceptual stream is sufficient for probabilistic sequence learning to occur within a single session or whether correlated streams are necessary, whether learning is affected by the transition probability between sequence elements, and how the sequence length influences learning. In each of three experiments, we used six horizontally arranged stimulus displays which consisted of randomly ordered bigrams xo and ox. The probability of the next possible target location out of two was either .50/.50 or .75/.25 and was marked by an underline. In Experiment 1, a left vs. right key response was required for the x of a marked bigram in the pure perceptual learning condition and a response key press corresponding to the marked bigram location (out of 6) was required in the correlated streams condition (i.e., the ring, middle, or index finger of the left and right hand, respectively). The same probabilistic 3-element sequence was used in both conditions. Learning occurred only in the correlated streams condition. In Experiment 2, we investigated whether sequence length affected learning correlated sequences by contrasting the 3-elements sequence with a 6-elements sequence. Significant sequence learning occurred in all conditions. In Experiment 3, we removed a potential confound, that is, the sequence of hand changes. Under these conditions, learning occurred for the 3-element sequence only and transition probability did not affect the amount of learning. Together, these results indicate that correlated streams are necessary for probabilistic sequence learning within a single session and that sequence length can reduce the chances for learning to occur.
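The .75/.25 contingency manipulation can be sketched as follows. The specific locations and the alternative-location mapping below are hypothetical, chosen only to illustrate how a probabilistic sequence of trials with a fixed transition probability could be generated.

```python
import random

SEQUENCE = [1, 3, 5]               # assumed contingent 3-element location sequence
ALTERNATIVES = {1: 2, 3: 4, 5: 0}  # assumed noncontingent alternative locations

def generate_trials(n_trials, p_contingent=0.75, rng=None):
    """Each trial follows the contingent sequence with probability p_contingent."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    trials = []
    for t in range(n_trials):
        contingent = SEQUENCE[t % len(SEQUENCE)]
        if rng.random() < p_contingent:
            trials.append(contingent)
        else:
            trials.append(ALTERNATIVES[contingent])
    return trials

trials = generate_trials(120)
hits = sum(t == SEQUENCE[i % 3] for i, t in enumerate(trials))
# Roughly three quarters of trials follow the contingent sequence.
```

A .50/.50 condition is obtained by setting p_contingent=0.5, which removes the contingency entirely and serves as the no-learning baseline.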

Relevance:

20.00%

Publisher:

Abstract:

The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and the mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly, and ocean–sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72–1.05) Gt C yr−1, which is within the lower half of previously published estimates (0.4–1.8 Gt C yr−1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo–Pacific and the northern Pacific, and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill.
We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
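The Monte Carlo ensemble-constraint procedure can be sketched schematically. The stand-in model, parameter ranges and retention fraction below are illustrative assumptions, not the Bern3D configuration: sample parameters, score each member against a pseudo-observed field, and retain the best-scoring members as the constrained ensemble.

```python
import math
import random

random.seed(42)  # reproducible sketch

def toy_model(export_gtc, dissolution_rate):
    """Stand-in for a full model run; returns a fake TA*-like diagnostic."""
    return export_gtc * (1.0 - math.exp(-dissolution_rate))

OBSERVED = toy_model(0.9, 1.2)  # pretend observation-based TA* target

# Build a 1000-member ensemble by Monte Carlo sampling of the parameters.
ensemble = []
for _ in range(1000):
    export = random.uniform(0.4, 1.8)  # range of previously published estimates
    rate = random.uniform(0.1, 3.0)    # assumed dissolution-rate prior range
    misfit = abs(toy_model(export, rate) - OBSERVED)
    ensemble.append((misfit, export))

# Constrain: keep the best-fitting quarter and summarise the export flux.
ensemble.sort()
constrained = [export for _, export in ensemble[:250]]
median_export = sorted(constrained)[len(constrained) // 2]
```

The real assessment scores members against spatial TA* fields rather than a scalar, but the logic is the same: the observations narrow the parameter distribution, and summary statistics (median, confidence interval) are read off the surviving members.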