144 results for Causal networks methodology
Abstract:
This paper presents a new type of very fine grid hydrological model based on the spatiotemporal distribution of a PMP (Probable Maximum Precipitation) and on the topography. The goal is to estimate the influence of this rainfall on the PMF (Probable Maximum Flood) of a catchment area in Switzerland. The spatiotemporal distribution of the PMP was realized using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and also gives the evolution of the rain intensity in time. This hydrological modeling is followed by hydraulic modeling of the surface and subterranean flow, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, resurgence and snowmelt. These added factors bring the developed model closer to reality and also offer flexibility in the initial conditions, which are added to the factors concerning the PMP, such as the duration of the rain and the speed and direction of the wind. Taken together, all these initial conditions offer a complete image of the PMF.
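The abstract does not reproduce the governing equation. A standard two-dimensional advection-diffusion form for a scalar rain-intensity field C(x, y, t) transported by a wind field (u, v) would read as below; this is the textbook form, not necessarily the exact formulation used in the paper:

```latex
\frac{\partial C}{\partial t}
  + u \frac{\partial C}{\partial x}
  + v \frac{\partial C}{\partial y}
  = D \left( \frac{\partial^2 C}{\partial x^2}
           + \frac{\partial^2 C}{\partial y^2} \right)
```

Here u and v are the wind-speed components (advection of the cloud over the terrain) and D is a diffusion coefficient governing the spreading of the rain intensity.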
Abstract:
The scenario considered here is one where brain connectivity is represented as a network and an experimenter wishes to assess the evidence for an experimental effect at each of the typically thousands of connections comprising the network. To do this, a univariate model is independently fitted to each connection. It would be unwise to declare significance based on an uncorrected threshold of α=0.05, since the expected number of false positives for a network comprising N=90 nodes and N(N-1)/2=4005 connections would be 200. Control of Type I errors over all connections is therefore necessary. The network-based statistic (NBS) and spatial pairwise clustering (SPC) are two distinct methods that have been used to control family-wise errors when assessing the evidence for an experimental effect with mass univariate testing. The basic principle of the NBS and SPC is the same as supra-threshold voxel clustering. Unlike voxel clustering, where the definition of a voxel cluster is unambiguous, 'clusters' formed among supra-threshold connections can be defined in different ways. The NBS defines clusters using the graph theoretical concept of connected components. SPC on the other hand uses a more stringent pairwise clustering concept. The purpose of this article is to compare the pros and cons of the NBS and SPC, provide some guidelines on their practical use and demonstrate their utility using a case study involving neuroimaging data.
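The NBS notion of a 'cluster' as a connected component among supra-threshold connections can be sketched in a few lines. The graph, threshold, and edge statistics below are illustrative, not taken from the article:

```python
from collections import defaultdict, deque

def supra_threshold_components(edge_stats, threshold):
    """Group supra-threshold connections (edges) into connected
    components, as in the network-based statistic (NBS)."""
    # Keep only edges whose test statistic exceeds the threshold.
    edges = [(i, j) for (i, j), t in edge_stats.items() if t > threshold]
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        # Breadth-first search restricted to supra-threshold edges.
        comp, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy example: connection -> t-statistic for a few node pairs.
stats = {(1, 2): 3.2, (2, 3): 3.5, (4, 5): 3.1, (5, 6): 1.0}
print(supra_threshold_components(stats, 3.0))  # [{1, 2, 3}, {4, 5}]
```

Under SPC's more stringent pairwise definition, nodes 1-2-3 would not automatically form one cluster; the connected-component rule above is specific to the NBS.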
Abstract:
The study covers 951 patients with a primary oral, pharyngeal or laryngeal cancer, and reveals a 14.5% rate of multifocality of squamous cell carcinoma in the upper digestive tract (mouth-pharynx-oesophagus) and distal airway (larynx-trachea-bronchi). Second localizations may be simultaneous (6.4%) with or subsequent (8.1%) to the discovery of the first cancer: from the second year of follow-up onwards, their incidence exceeds that of recurrences. They occur in the ENT region (8.5%) as well as in the oesophagus (3%) and bronchi (3%). Cancer of the soft palate shows a particularly high rate of multifocality (51%). The authors describe a type of screening endoscopy (bucco-pharyngo-oesophago-laryngo-tracheobronchoscopy) whose reliability rests on specific technical expertise on the one hand, and on knowledge of the characteristics of second localizations on the other. This systematic screening makes it possible to detect secondary tumours at a stage that is most often early and still asymptomatic.
Abstract:
This study was conducted to assess if fingerprint specialists could be influenced by extraneous contextual information during a verification process. Participants were separated into three groups: a control group (no contextual information was given), a low bias group (minimal contextual information was given in the form of a report prompting conclusions), and a high bias group (an internationally recognized fingerprint expert provided conclusions and case information to deceive this group into believing that it was his case and conclusions). A similar experiment was later conducted with laypersons. The results showed that fingerprint experts were influenced by contextual information during fingerprint comparisons, but not towards making errors. Instead, fingerprint experts under the biasing conditions provided significantly fewer definitive and erroneous conclusions than the control group. In contrast, the novice participants were more influenced by the bias conditions and did tend to make incorrect judgments, especially when prompted towards an incorrect response by the bias prompt.
Abstract:
This paper proposes a novel approach for the analysis of illicit tablets based on their visual characteristics. In particular, the paper concentrates on the problem of ecstasy pill seizure profiling and monitoring. The presented method extracts the visual information from pill images and builds a representation of it, i.e. a pill profile based on the pill's visual appearance. Different visual features are used to build different image similarity measures, which form the basis for a pill monitoring strategy based on both discriminative and clustering models. The discriminative model makes it possible to infer whether two pills come from the same seizure, while the clustering model groups pills that share similar visual characteristics. The resulting clustering structure allows a visual identification of the relationships between different seizures. The proposed approach was evaluated using a data set of 621 ecstasy pill pictures. The results demonstrate that this is a feasible and cost-effective method for performing pill profiling and monitoring.
Abstract:
We examine the relationship between structural social capital, resource assembly, and firm performance of entrepreneurs in Africa. We posit that social capital primarily composed of kinship or family ties helps the entrepreneur to raise resources, but it does so at a cost. Using data drawn from small firms in Kampala, Uganda, we explore how shared identity among the entrepreneur's social network moderates this relationship. A large network contributed a higher quantity of resources raised, but at a higher cost when shared identity was high. We discuss the implications of these findings for the role of family ties and social capital in resource assembly, with an emphasis on developing economies.
Abstract:
AIMS/HYPOTHESIS: Epidemiological and experimental evidence suggests that uric acid has a role in the aetiology of type 2 diabetes. Using a Mendelian randomisation approach, we investigated whether there is evidence for a causal role of serum uric acid for development of type 2 diabetes. METHODS: We examined the associations of serum-uric-acid-raising alleles of eight common variants recently identified in genome-wide association studies and summarised this in a genetic score with type 2 diabetes in case-control studies including 7,504 diabetes patients and 8,560 non-diabetic controls. We compared the observed effect size to that expected based on: (1) the association between the genetic score and uric acid levels in non-diabetic controls; and (2) the meta-analysed uric acid level to diabetes association. RESULTS: The genetic score showed a linear association with uric acid levels, with a difference of 12.2 μmol/l (95% CI 9.3, 15.1) by score tertile. No significant associations were observed between the genetic score and potential confounders. No association was observed between the genetic score and type 2 diabetes with an OR of 0.99 (95% CI 0.94, 1.04) per score tertile, significantly different (p = 0.046) from that expected (1.04 [95% CI 1.03, 1.05]) based on the observed uric acid difference by score tertile and the uric acid to diabetes association of 1.21 (95% CI 1.14, 1.29) per 60 μmol/l. CONCLUSIONS/INTERPRETATION: Our results do not support a causal role of serum uric acid for the development of type 2 diabetes and limit the expectation that uric-acid-lowering drugs will be effective in the prevention of type 2 diabetes.
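The expected effect size in this design can be reproduced from the numbers in the abstract by scaling the uric acid to diabetes association (OR 1.21 per 60 μmol/l) to the 12.2 μmol/l uric acid difference per score tertile. A quick arithmetic check (values from the abstract; the log-linear scaling is the standard assumption in Mendelian randomisation, assumed here):

```python
import math

# Values reported in the abstract.
or_per_60umol = 1.21      # uric acid -> type 2 diabetes, per 60 umol/l
diff_per_tertile = 12.2   # umol/l uric acid per genetic score tertile

# Log-linear scaling: OR expected per score tertile if uric acid were causal.
expected_or = math.exp(math.log(or_per_60umol) * diff_per_tertile / 60)
print(round(expected_or, 2))  # 1.04, matching the expected OR in the abstract
```

The observed OR of 0.99 per tertile falls significantly short of this expected 1.04, which is the basis for the null causal conclusion.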
Abstract:
BACKGROUND AND STUDY AIMS: Appropriate use of colonoscopy is a key component of quality management in gastrointestinal endoscopy. In an update of a 1998 publication, the 2008 European Panel on the Appropriateness of Gastrointestinal Endoscopy (EPAGE II) defined appropriateness criteria for various colonoscopy indications. This introductory paper therefore deals with methodology, general appropriateness, and a review of colonoscopy complications. METHODS: The RAND/UCLA Appropriateness Method was used to evaluate the appropriateness of various diagnostic colonoscopy indications, with 14 multidisciplinary experts using a scale from 1 (extremely inappropriate) to 9 (extremely appropriate). Evidence reported in a comprehensive updated literature review was used for these decisions. Consolidation of the ratings into three appropriateness categories (appropriate, uncertain, inappropriate) was based on the median and the heterogeneity of the votes. The experts then met to discuss areas of disagreement in the light of existing evidence, followed by a second rating round, with a subsequent third voting round on necessity criteria, using much more stringent criteria (i.e. colonoscopy is deemed mandatory). RESULTS: Overall, 463 indications were rated, with 55%, 16% and 29% of them being judged appropriate, uncertain and inappropriate, respectively. Perforation and hemorrhage rates, as reported in 39 studies, were in general < 0.1% and < 0.3%, respectively. CONCLUSIONS: The updated EPAGE II criteria constitute an aid to clinical decision-making but should in no way replace individual judgment. Detailed panel results are freely available on the internet (www.epage.ch) and will thus constitute a reference source of information for clinicians.
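The consolidation of panel votes into three categories can be sketched as follows. The median cut-offs and the one-third disagreement rule below are the usual RAND/UCLA conventions, assumed here rather than taken from the paper:

```python
def classify_indication(votes):
    """Consolidate 1-9 appropriateness ratings into one of three
    categories, following common RAND/UCLA conventions (assumed)."""
    votes = sorted(votes)
    n = len(votes)
    median = (votes[n // 2] + votes[(n - 1) // 2]) / 2
    # Disagreement: at least a third of panellists in each extreme band.
    low = sum(v <= 3 for v in votes)
    high = sum(v >= 7 for v in votes)
    if low >= n / 3 and high >= n / 3:
        return "uncertain"
    if median >= 7:
        return "appropriate"
    if median <= 3:
        return "inappropriate"
    return "uncertain"

# 14 experts rating one indication, as in EPAGE II.
print(classify_indication([7, 8, 8, 9, 7, 7, 8, 9, 7, 8, 9, 7, 8, 8]))
# appropriate
```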
Abstract:
Nuclear receptors are a major component of signal transduction in animals. They mediate the regulatory activities of many hormones, nutrients and metabolites on the homeostasis and physiology of cells and tissues. It is of high interest to model the corresponding regulatory networks. While molecular and cell biology studies of individual promoters have provided important mechanistic insight, a more complex picture is emerging from genome-wide studies. The regulatory circuitry of nuclear receptor regulated gene expression networks, and their response to cellular signaling, appear highly dynamic, and involve long as well as short range chromatin interactions. We review how progress in understanding the kinetics and regulation of cofactor recruitment, and the development of new genomic methods, provide opportunities but also a major challenge for modeling nuclear receptor mediated regulatory networks.
Abstract:
Particle physics studies highly complex processes which cannot be directly observed. Scientific realism claims that we are nevertheless warranted in believing that these processes really occur and that the objects involved in them really exist. This dissertation defends a version of scientific realism, called causal realism, in the context of particle physics. I start by introducing the central theses and arguments in the recent philosophical debate on scientific realism (chapter 1), with a special focus on an important presupposition of the debate, namely common sense realism. Chapter 2 then discusses entity realism, which introduces a crucial element into the debate by emphasizing the importance of experiments in defending scientific realism. Most of the chapter is concerned with Ian Hacking's position, but I also argue that Nancy Cartwright's version of entity realism is ultimately preferable as a basis for further development. In chapter 3, I take a step back and consider the question whether the realism debate is worth pursuing at all. Arthur Fine has given a negative answer to that question, proposing his natural ontological attitude as an alternative to both realism and antirealism. I argue that the debate (in particular the realist side of it) is in fact less vicious than Fine presents it. The second part of my work (chapters 4-6) develops, illustrates and defends causal realism. The key idea is that inference to the best explanation is reliable in some cases, but not in others. Chapter 4 characterizes the difference between these two kinds of cases in terms of three criteria which distinguish causal from theoretical warrant. In order to flesh out this distinction, chapter 5 then applies it to a concrete case from the history of particle physics, the discovery of the neutrino. This case study shows that the distinction between causal and theoretical warrant is crucial for understanding what it means to "directly detect" a new particle.
But the distinction is also an effective tool against what I take to be the presently most powerful objection to scientific realism: Kyle Stanford's argument from unconceived alternatives. I respond to this argument in chapter 6, and I illustrate my response with a discussion of Jean Perrin's experimental work concerning the atomic hypothesis. In the final part of the dissertation, I turn to the specific challenges posed to realism by quantum theories. One of these challenges comes from the experimental violations of Bell's inequalities, which indicate a failure of locality in the quantum domain. I show in chapter 7 how causal realism can further our understanding of quantum non-locality by taking account of some recent experimental results. Another challenge to realism in quantum mechanics comes from delayed-choice experiments, which seem to imply that certain aspects of what happens in an experiment can be influenced by later choices of the experimenter. Chapter 8 analyzes these experiments and argues that they do not warrant the antirealist conclusions which some commentators draw from them. It pays particular attention to the case of delayed-choice entanglement swapping and the corresponding question whether entanglement is a real physical relation. In chapter 9, I finally address relativistic quantum theories. It is often claimed that these theories are incompatible with a particle ontology, and this calls into question causal realism's commitment to localizable and countable entities. I defend the commitments of causal realism against these objections, and I conclude with some remarks connecting the interpretation of quantum field theory to more general metaphysical issues confronting causal realism.
Abstract:
Early-onset acquired epileptic aphasia (Landau-Kleffner syndrome) may present as a developmental language disturbance and the affected child may also exhibit autistic features. Landau-Kleffner syndrome is now seen as the rare and severe end of a spectrum of cognitive-behavioural symptoms that can be seen in idiopathic (genetic) focal epilepsies of childhood, the benign end being the more frequent typical rolandic epilepsy. Several recent studies show that many children with rolandic epilepsy have minor developmental cognitive and behavioural problems and that some undergo a deterioration (usually temporary) in these domains, the so-called "atypical" forms of the syndrome. The severity and type of deterioration correlate with the site and spread of the epileptic spikes recorded on the electroencephalogram within the perisylvian region, and continuous spike-waves during sleep (CSWS) frequently occur during this period of the epileptic disorder. Some of these children have more severe preexisting communicative and language developmental disorders. If early stagnation or regression occurs in these domains, it presumably reflects epileptic activity in networks outside the perisylvian area, i.e. those involved in social cognition and emotions. Longitudinal studies will be necessary to find out if and how much the bioelectrical abnormalities play a causal role in this subgroup of children with both various degrees of language and autistic regression and features of idiopathic focal epilepsy. One has to remember that it took nearly 40 years to fully acknowledge the epileptic origin of aphasia in Landau-Kleffner syndrome and the milder acquired cognitive problems in rolandic epilepsies.
Abstract:
Almost 30 years ago, Bayesian networks (BNs) were developed in the field of artificial intelligence as a framework that should assist researchers and practitioners in applying the theory of probability to inference problems of more substantive size and, thus, to more realistic and practical problems. Since the late 1980s, Bayesian networks have also attracted researchers in forensic science, and this tendency has considerably intensified throughout the last decade. This review article provides an overview of the scientific literature that describes research on Bayesian networks as a tool that can be used to study, develop and implement probabilistic procedures for evaluating the probative value of particular items of scientific evidence in forensic science. Primary attention is drawn here to evaluative issues that pertain to forensic DNA profiling evidence, because this is one of the main categories of evidence whose assessment has been studied through Bayesian networks. The scope of topics is large and includes almost any aspect that relates to forensic DNA profiling. Typical examples are inference of source (or 'criminal identification'), relatedness testing, database searching and special trace evidence evaluation (such as mixed DNA stains or stains with low quantities of DNA). The perspective of the review presented here is not exclusively restricted to DNA evidence, but also includes relevant references and discussion of both the concept of Bayesian networks and its general usage in the legal sciences as one among several different graphical approaches to evidence evaluation.
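In the simplest source-level scenario, the Bayesian-network machinery reduces to a likelihood ratio combined with Bayes' rule in odds form. A minimal sketch (the random match probability and the prior odds are illustrative numbers, not from the review):

```python
from fractions import Fraction

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """Probative value of evidence E for the prosecution hypothesis Hp
    versus the defence hypothesis Hd."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * lr

# Simplest DNA-source scenario: the suspect's profile matches the stain.
# P(match | suspect is source) = 1; P(match | unknown source) = random
# match probability (illustrative value).
rmp = Fraction(1, 10000)
lr = likelihood_ratio(Fraction(1), rmp)
print(lr)                                    # 10000
print(posterior_odds(Fraction(1, 1000), lr)) # prior 1:1000 -> posterior 10:1
```

Full Bayesian networks extend this two-hypothesis calculation to structured problems such as mixtures, relatedness and database searches, where the conditional probability tables encode the dependencies between propositions and findings.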
Abstract:
A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automatise the extraction of literary social networks (Elson, 2012; Sack, 2013) and other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness and eigenvector. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures that bear similar centrality scores. The first part revolves around Rousseau's mentor, a figure of openness. The second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
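The centrality measures driving the typology can be computed directly from an index-derived co-occurrence graph. A minimal pure-Python sketch of degree centrality, the first of the four measures; the character names and edges are invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: degree / (n - 1), computed from a
    list of undirected co-occurrence edges."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Toy co-occurrence edges extracted from a (hypothetical) book index.
edges = [
    ("Narrator", "Mentor"), ("Narrator", "Friend"),
    ("Narrator", "Schemer"), ("Schemer", "Friend"),
]
centrality = degree_centrality(edges)
print(max(centrality, key=centrality.get))  # Narrator
```

Closeness, betweenness and eigenvector centrality follow the same pattern over the same adjacency structure and capture, respectively, reachability, brokerage between groups (the "intermediary" roles above), and connection to other well-connected figures.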