34 results for Logic and Probabilistic Models
Abstract:
Most criticism of homeopathy concerns its lack of a scientific basis and theoretical models. For homeopathy to be accepted as a valid part of medical practice, a well-structured research strategy is needed. This is often hampered by methodological problems as well as by gross underinvestment in the required academic resources. Fundamental research could make important contributions to our understanding of the mechanisms of action of homeopathic remedies and high dilutions. Since the pioneering works of Kolisko on wheat germination (Kolisko, 1923) and Junker on the growth of microorganisms (paramecium, yeast, fungi) (Junker, 1928), a number of experiments have been performed either with healthy organisms (various physiological aspects of growth) or with artificially diseased organisms, which may react more markedly to homeopathic treatments than healthy ones. In the latter case, the preliminary stress may be either abiotic, e.g. heavy metals, or biotic, e.g. fungal and viral pathogens or nematode infection. Research has also been carried out into the applicability of homeopathic principles to crop growth and disease control (agrohomeopathy): because of the extreme dilutions used, the environmental impact is low and such treatments are well suited to the holistic approach of sustainable agriculture (Betti et al., 2006). Unfortunately, as Scofield reported in an extensive critical review (Scofield, 1984), there is little firm evidence to support the reliability of the reported results, owing to poor experimental methodology and inadequate statistical analysis. Moreover, since there is no agricultural homeopathic pharmacopoeia, much work is required to find suitable remedies, potencies and dose levels.
Abstract:
BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, such as a social security number, is not available, or non-unique person-identifiable information, such as names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely even if the plain names differ by only a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In this Privacy Preserving Probabilistic Record Linkage method, we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create these templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables. RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
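The core trick the abstract relies on, encoding strings as Bloom filters so that similar names yield similar filters, can be illustrated with a short sketch. The parameters below (filter length, number of hash functions, the HMAC keying scheme, and the helper names) are illustrative assumptions, not the P3RL implementation:

```python
import hashlib
import hmac

BITS = 1000          # Bloom filter length (illustrative choice)
NUM_HASHES = 10      # hash functions per bigram (illustrative choice)
SECRET = b"shared-site-key"  # hypothetical key shared by the two sites

def bigrams(name: str) -> set[str]:
    """Split a pre-processed name into padded character bigrams."""
    s = f"_{name.lower().strip()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(name: str) -> set[int]:
    """Map each bigram to NUM_HASHES bit positions via keyed hashing."""
    positions = set()
    for gram in bigrams(name):
        for i in range(NUM_HASHES):
            digest = hmac.new(SECRET, f"{gram}|{i}".encode(), hashlib.sha256)
            positions.add(int(digest.hexdigest(), 16) % BITS)
    return positions

def dice_similarity(a: set[int], b: set[int]) -> float:
    """Dice coefficient between two Bloom filters (as sets of set bits)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Names differing by one character still score high, unlike plain hashes,
# whose digests would be completely unrelated.
print(dice_similarity(bloom_encode("Meier"), bloom_encode("Meyer")))
print(dice_similarity(bloom_encode("Meier"), bloom_encode("Schmidt")))
```

With a standard hash, "Meier" and "Meyer" would produce unrelated digests; here they share most bigrams and therefore most bit positions, so the linkage center can compute a meaningful similarity score without ever seeing the plain names.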
Abstract:
INTRODUCTION This paper focuses exclusively on experimental models with ultra-high dilutions (i.e. beyond 10^-23) that have been submitted to replication scrutiny. It updates previous surveys, considers suggestions made by the research community and compares the state of replication in 1994 with that in 2015. METHODS Following a literature search, biochemical, immunological, botanical, cell-biological and zoological studies on ultra-high dilutions (potencies) were included. Reports were grouped into initial studies, laboratory-internal, multicentre and external replications. Repetition could yield comparable, zero, or opposite results. The null hypothesis was that test and control groups would not be distinguishable (zero effect). RESULTS A total of 126 studies were found, of which 28 were initial studies. When all 98 replicative studies were considered, 70.4% (i.e. 69) reported a result comparable to that of the initial study, 20.4% (20) a zero effect and 9.2% (9) an opposite result. For both the studies up to 1994 and the studies from 1995-2015, the null hypothesis (dominance of zero results) should be rejected. Furthermore, the odds of finding a comparable result are generally higher than those of finding an opposite result. Although this holds for all three types of replication study, the fraction of comparable studies diminishes from laboratory-internal (82.9%) to multicentre (75%) to external (48.3%), while the fraction of opposite results rises from 4.9% to 10.7% to 13.8%. Furthermore, it became clear that the probability of an external replication producing comparable results is greater for models that had already been further scrutinized by the initial researchers. CONCLUSIONS We found 28 experimental models which underwent replication. In total, 24 models were replicated with comparable results, 12 models with zero effect, and 6 models with opposite results. Five models were externally reproduced with comparable results. We encourage further replications of studies in order to learn more about the model systems used.
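The headline fractions follow directly from the reported counts; the sketch below reproduces that arithmetic and adds a purely illustrative binomial tail check (the p = 0.5 "zero effects dominate" threshold is an assumption made here for illustration, not the statistical test used in the paper):

```python
from math import comb

# Counts reported in the abstract: 98 replicative studies in total.
counts = {"comparable": 69, "zero": 20, "opposite": 9}
total = sum(counts.values())

for outcome, n in counts.items():
    print(f"{outcome}: {n}/{total} = {100 * n / total:.1f}%")

# Illustrative check of the null hypothesis: if zero effects dominated
# (each study showing a zero result with probability >= 0.5), observing
# only 20 zero results out of 98 would be vanishingly unlikely.
p_tail = sum(comb(total, k) * 0.5 ** total for k in range(counts["zero"] + 1))
print(f"P(X <= 20 | n=98, p=0.5) = {p_tail:.2e}")
```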
Abstract:
Despite the rapid growth of observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the large number of extrasolar planets that have been discovered, it is now possible to treat the known planets as a population that places statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, in which synthetic planetary populations are generated and compared with the observed population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on the properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models, each addressing one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as those describing the protoplanetary disc (of gas and solids), those describing one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those describing the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address key results of planetary population synthesis, such as the planetary mass function and the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and the specialized models describing them can be put to the observational test. Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, global models will likely undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
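The population synthesis loop itself is conceptually simple: draw disc initial conditions from assumed distributions, run a global formation model on each disc, and compare the synthetic population with observations. The sketch below is a deliberately crude stand-in; the disc distributions and the toy formation rule are invented placeholders, whereas real global models solve the coupled disc, accretion and migration physics the review describes:

```python
import random

def sample_disc():
    """Draw illustrative initial conditions for a protoplanetary disc.
    The distributions (log-uniform disc mass, uniform metallicity)
    are placeholders, not calibrated to observed disc surveys."""
    return {
        "disc_mass": 10 ** random.uniform(-2.5, -0.5),  # solar masses
        "metallicity": random.uniform(-0.5, 0.5),        # [Fe/H]
    }

def toy_formation_model(disc):
    """A crude stand-in for a global formation model: a solid core grows
    in proportion to disc mass and metallicity, and runaway gas accretion
    kicks in above a critical core mass (~10 Earth masses)."""
    core_mass = 300.0 * disc["disc_mass"] * 10 ** disc["metallicity"]
    envelope = 100.0 * core_mass if core_mass > 10.0 else 0.1 * core_mass
    return core_mass + envelope  # planet mass in Earth masses

# Population synthesis: many discs in, one synthetic population out,
# which can then be compared statistically with the observed sample.
population = [toy_formation_model(sample_disc()) for _ in range(10_000)]
giants = sum(m > 100.0 for m in population)
print(f"fraction of giant planets: {giants / len(population):.2%}")
```

The point of the sketch is the architecture, not the numbers: each sub-model (disc, core growth, gas accretion) is a swappable component, which is exactly what makes global models sensitive to progress in the specialized models they are built from.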