945 results for Moretti, Franco: Graphs, Maps, Trees. Abstract models for a literary theory
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network, in which the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allow networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
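As an illustration of the kind of fitness evaluation described above, the sketch below compares a candidate graph against a target network using a few well-known network measures (density, clustering, degree assortativity). It assumes networkx is available; the chosen measures, the unweighted sum of differences, and the example generators are illustrative, not the thesis's actual measure set.

```python
# Minimal sketch (not the thesis code): score how (dis)similar a candidate
# graph is to a target network via a handful of common network measures.
import networkx as nx


def measure_vector(g: nx.Graph):
    """Collect a few well-known network properties of g."""
    return [
        nx.density(g),
        nx.average_clustering(g),
        nx.degree_assortativity_coefficient(g),
    ]


def dissimilarity(candidate: nx.Graph, target: nx.Graph) -> float:
    """Sum of absolute differences between measure vectors (lower = more similar)."""
    return sum(abs(a - b) for a, b in zip(measure_vector(candidate), measure_vector(target)))


if __name__ == "__main__":
    target = nx.barabasi_albert_graph(200, 3, seed=1)    # stand-in "real" network
    candidate = nx.erdos_renyi_graph(200, 0.03, seed=2)  # graph produced by some model
    print(f"dissimilarity = {dissimilarity(candidate, target):.4f}")
```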
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
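To make the flavour of such models concrete, here is a small sketch (my own illustration, not the paper's calibration algorithm) of a discrete-time evolving range-dependent random graph: each potential edge between nodes i and j appears with a probability that decays with its range |i - j| and disappears with a fixed probability at every step. The parameter names alpha, lam and omega are assumptions.

```python
# Minimal sketch of an evolving range-dependent random graph: an edge of
# range k = |i - j| is born with probability alpha*lam**(k-1) and dies with
# probability omega at each time step. Parameters are illustrative.
import numpy as np


def evolve_rdrg(n=50, steps=20, alpha=0.8, lam=0.7, omega=0.3, seed=0):
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=bool)
    snapshots = []
    idx_i, idx_j = np.triu_indices(n, k=1)
    k = idx_j - idx_i                    # range of each potential edge
    birth_p = alpha * lam ** (k - 1)     # range-dependent birth probability
    for _ in range(steps):
        present = adj[idx_i, idx_j]
        r = rng.random(idx_i.size)
        # present edges survive with prob 1 - omega; absent edges are born with birth_p
        new_state = np.where(present, r >= omega, r < birth_p)
        adj[idx_i, idx_j] = new_state
        adj[idx_j, idx_i] = new_state
        snapshots.append(adj.copy())
    return snapshots


if __name__ == "__main__":
    snaps = evolve_rdrg()
    print("edges per snapshot:", [int(s.sum() // 2) for s in snaps[:5]], "...")
```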
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high-resolution urban digital elevation models using a shadow casting algorithm. By utilizing weighted annuli to derive SVF from hemispherical images, the distant light source positions can be predefined and uniformly spread over the whole hemisphere, whereas another method applies a random set of light source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results based on a large number of SVF images. However, when comparing variations at pixel level between an image generated using the new method presented in this paper and the image from the random method, anisotropic patterns occur. The absolute mean difference between the two methods is 0.002, ranging up to 0.040. The maximum difference can be as much as 0.122. Since SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
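The weighted-annuli idea can be sketched in a few lines of Python (an illustration under my own discretisation, not the paper's implementation): the hemisphere is split into zenith-angle annuli, each annulus gets a cosine-derived weight that integrates to one over an unobstructed sky, and the SVF is the weighted sum of the visible-sky fraction per annulus. The polar sky-mask representation and the number of annuli are assumptions.

```python
# Minimal sketch: sky view factor from a polar hemispherical sky mask using
# weighted annuli. sky_mask[i_annulus, j_azimuth] is True where sky is visible;
# the per-annulus weights sin(2*theta)*d_theta sum to ~1 over the hemisphere.
import numpy as np


def svf_weighted_annuli(sky_mask: np.ndarray) -> float:
    n_annuli = sky_mask.shape[0]
    d_theta = (np.pi / 2) / n_annuli
    theta = (np.arange(n_annuli) + 0.5) * d_theta   # zenith angle of annulus centre
    weights = np.sin(2 * theta) * d_theta           # cosine-weighted annulus weight
    visible_fraction = sky_mask.mean(axis=1)        # share of visible sky per annulus
    return float(np.sum(weights * visible_fraction))


if __name__ == "__main__":
    open_sky = np.ones((36, 360), dtype=bool)
    print("open terrain SVF ~", round(svf_weighted_annuli(open_sky), 3))      # ~1.0
    half_blocked = open_sky.copy()
    half_blocked[18:, :] = False                    # horizon obstructed up to 45 degrees
    print("obstructed SVF ~", round(svf_weighted_annuli(half_blocked), 3))    # ~0.5
```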
Abstract:
BIOME 6000 is an international project to map vegetation globally at mid-Holocene (6000 14C yr bp) and last glacial maximum (LGM, 18,000 14C yr bp), with a view to evaluating coupled climate-biosphere model results. Primary palaeoecological data are assigned to biomes using an explicit algorithm based on plant functional types. This paper introduces the second Special Feature on BIOME 6000. Site-based global biome maps are shown with data from North America, Eurasia (except South and Southeast Asia) and Africa at both time periods. A map based on surface samples shows the method’s skill in reconstructing present-day biomes. Cold and dry conditions at LGM favoured extensive tundra and steppe. These biomes intergraded in northern Eurasia. Northern hemisphere forest biomes were displaced southward. Boreal evergreen forests (taiga) and temperate deciduous forests were fragmented, while European and East Asian steppes were greatly extended. Tropical moist forests (i.e. tropical rain forest and tropical seasonal forest) in Africa were reduced. In south-western North America, desert and steppe were replaced by open conifer woodland, opposite to the general arid trend but consistent with modelled southward displacement of the jet stream. The Arctic forest limit was shifted slightly north at 6000 14C yr bp in some sectors, but not in all. Northern temperate forest zones were generally shifted greater distances north. Warmer winters as well as summers in several regions are required to explain these shifts. Temperate deciduous forests in Europe were greatly extended, into the Mediterranean region as well as to the north. Steppe encroached on forest biomes in interior North America, but not in central Asia. Enhanced monsoons extended forest biomes in China inland and Sahelian vegetation into the Sahara, while the African tropical rain forest was also reduced, consistent with a modelled northward shift of the ITCZ and a more seasonal climate in the equatorial zone. Palaeobiome maps show the outcome of separate, independent migrations of plant taxa in response to climate change. The average composition of biomes at LGM was often markedly different from today. Refugia for the temperate deciduous and tropical rain forest biomes may have existed offshore at LGM, but their characteristic taxa also persisted as components of other biomes. Examples include temperate deciduous trees that survived in cool mixed forest in eastern Europe, and tropical evergreen trees that survived in tropical seasonal forest in Africa. The sequence of biome shifts during a glacial-interglacial cycle may help account for some disjunct distributions of plant taxa. For example, the now-arid Saharan mountains may have linked Mediterranean and African tropical montane floras during enhanced monsoon regimes. Major changes in physical land-surface conditions, shown by the palaeobiome data, have implications for the global climate. The data can be used directly to evaluate the output of coupled atmosphere-biosphere models. The data could also be objectively generalized to yield realistic gridded land-surface maps, for use in sensitivity experiments with atmospheric models. Recent analyses of vegetation-climate feedbacks have focused on the hypothesized positive feedback effects of climate-induced vegetation changes in the Sahara/Sahel region and the Arctic during the mid-Holocene. However, a far wider spectrum of interactions potentially exists and could be investigated, using these data, both for 6000 14C yr bp and for the LGM.
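For readers unfamiliar with biomisation, the sketch below illustrates one common affinity-score scheme under assumed taxon-to-PFT and PFT-to-biome tables (the actual BIOME 6000 assignment rules are more elaborate): each biome is scored by summing square-root-transformed abundances of the taxa whose plant functional types it admits, and the highest-scoring biome is assigned.

```python
# Illustrative biomisation sketch with made-up lookup tables; not the
# BIOME 6000 algorithm verbatim.
from math import sqrt

TAXON_TO_PFT = {"Picea": "boreal conifer", "Quercus": "temperate deciduous",
                "Artemisia": "steppe forb", "Poaceae": "grass"}
BIOME_PFTS = {"taiga": {"boreal conifer"},
              "temperate deciduous forest": {"temperate deciduous", "grass"},
              "steppe": {"steppe forb", "grass"}}


def assign_biome(sample_percentages: dict) -> str:
    scores = {}
    for biome, pfts in BIOME_PFTS.items():
        scores[biome] = sum(sqrt(pct) for taxon, pct in sample_percentages.items()
                            if TAXON_TO_PFT.get(taxon) in pfts)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    sample = {"Picea": 60.0, "Quercus": 5.0, "Artemisia": 10.0, "Poaceae": 5.0}
    print(assign_biome(sample))  # -> "taiga" for this made-up pollen sample
```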
Predictive models for chronic renal disease using decision trees, Naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often remain undiscovered, and advanced data mining techniques can help remedy this. This thesis deals mainly with the Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers are applied and the quality of their output is evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The performance of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Sensitivity and specificity are further used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
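A rough Python/scikit-learn analogue of the pipeline described above is sketched below (the thesis itself used Weka 3.6); the synthetic dataset, the number of selected features, and the train/test split are placeholders.

```python
# Minimal sketch: chi-square feature selection, then three classifiers
# compared by sensitivity and specificity. Data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X = X - X.min(axis=0)                              # chi2 requires non-negative features
X = SelectKBest(chi2, k=10).fit_transform(X, y)    # Chi-Square feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)                   # recall on the positive class
    specificity = tn / (tn + fp)
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```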
Abstract:
Graduate Program in Agronomy (Plant Protection) - FCA
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Searches are presented for heavy scalar (H) and pseudoscalar (A) Higgs bosons posited in two-Higgs-doublet model (2HDM) extensions of the standard model (SM). These searches are based on a data sample of pp collisions collected with the CMS experiment at the LHC at a center-of-mass energy of √s = 8 TeV and corresponding to an integrated luminosity of 19.5 fb⁻¹. The decays H → hh and A → Zh, where h denotes an SM-like Higgs boson, lead to events with three or more isolated charged leptons or with a photon pair accompanied by one or more isolated leptons. The search results are presented in terms of the H and A production cross sections times branching fractions and are further interpreted in terms of 2HDM parameters. We place 95% C.L. cross section upper limits of approximately 7 pb on σB for H → hh and 2 pb for A → Zh. Also presented are the results of a search for the rare decay of the top quark that results in a charm quark and an SM Higgs boson, t → ch, the existence of which would indicate a nonzero flavor-changing Yukawa coupling of the top quark to the Higgs boson. We place a 95% C.L. upper limit of 0.56% on B(t → ch).
Abstract:
Background: Smear-negative pulmonary tuberculosis (SNPT) accounts for 30% of pulmonary tuberculosis cases reported yearly in Brazil. This study aimed to develop a prediction model for SNPT for outpatients in areas with scarce resources. Methods: The study enrolled 551 patients with clinical-radiological suspicion of SNPT, in Rio de Janeiro, Brazil. The original data was divided into two equivalent samples for generation and validation of the prediction models. Symptoms, physical signs and chest X-rays were used for constructing logistic regression and classification and regression tree models. From the logistic regression, we generated a clinical and radiological prediction score. The area under the receiver operating characteristic curve, sensitivity, and specificity were used to evaluate the model's performance in both generation and validation samples. Results: It was possible to generate predictive models for SNPT with sensitivity ranging from 64% to 71% and specificity ranging from 58% to 76%. Conclusion: The results suggest that those models might be useful as screening tools for estimating the risk of SNPT, optimizing the utilization of more expensive tests, and avoiding costs of unnecessary anti-tuberculosis treatment. Those models might be cost-effective tools in a health care network with hierarchical distribution of scarce resources.
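The evaluation described above can be mimicked with a short sketch (synthetic data, not the study's clinical and radiological variables): fit a logistic regression on a generation sample, use the predicted probability as a risk score on the validation sample, and report AUC, sensitivity and specificity.

```python
# Minimal sketch: logistic-regression screening score evaluated on a held-out
# validation half, with ROC AUC, sensitivity and specificity. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=551, n_features=8, n_informative=5, random_state=1)
# generation / validation halves, mirroring the study's split
X_gen, X_val, y_gen, y_val = train_test_split(X, y, test_size=0.5, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_gen, y_gen)
risk = model.predict_proba(X_val)[:, 1]        # predicted probability acts as the score
pred = (risk >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
print(f"AUC={roc_auc_score(y_val, risk):.2f}, "
      f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```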
Abstract:
The thesis treats two families of statistical mechanics models on various graphs: ferromagnetic spin (Ising) models and monomer-dimer models. The first chapter is mainly devoted to the work of Dembo and Montanari, in which the Ising model on random graphs is solved. The second chapter studies monomer-dimer models, starting from the work of Heilmann and Lieb, with the aim of making new contributions to the theory. The main topics treated are correlation inequalities, exact solutions on some tree graphs and on the complete graph, and the concentration of the free energy around its mean value on the dilute Erdős-Rényi random graph.
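As a toy illustration of the monomer-dimer partition function mentioned above (not code from the thesis), the following brute-force enumeration sums over all matchings of a small graph, weighting each matching by a dimer activity w and each unmatched vertex by a monomer activity x. The example graph K_4 and the unit activities are assumptions.

```python
# Minimal sketch: monomer-dimer partition function
# Z(x, w) = sum over matchings M of x**(#unmatched vertices) * w**|M|,
# computed by brute force on a small graph.
from itertools import combinations


def monomer_dimer_Z(n_vertices, edges, x=1.0, w=1.0):
    total = 0.0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):        # the edge subset is a matching
                total += (x ** (n_vertices - 2 * k)) * (w ** k)
    return total


if __name__ == "__main__":
    K4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    # 1 empty matching + 6 single dimers + 3 perfect matchings = 10 at x = w = 1
    print(monomer_dimer_Z(4, K4_edges, x=1.0, w=1.0))
```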
Abstract:
This voluminous book, which draws on almost 1,000 references, provides an important theoretical base for practice. After an informative introduction about models, maps and metaphors, Forte provides an impressive presentation of several perspectives for use in practice: applied ecological theory, applied system theory, applied biology, applied cognitive science, applied psychodynamic theory, applied behaviourism, applied symbolic interactionism, applied social role theory, applied economic theory, and applied critical theory. Finally, he completes the book with a chapter on “Multi theory practice and routes to integration.”
Abstract:
The new computing paradigm known as cognitive computing attempts to imitate the human capabilities of learning, problem solving, and considering things in context. To do so, an application (a cognitive system) must learn from its environment (e.g., by interacting with various interfaces). These interfaces can run the gamut from sensors to humans to databases. Accessing data through such interfaces allows the system to conduct cognitive tasks that can support humans in decision-making or problem-solving processes. Cognitive systems can be integrated into various domains (e.g., medicine or insurance). For example, a cognitive system in cities can collect data, can learn from various data sources and can then attempt to connect these sources to provide real time optimizations of subsystems within the city (e.g., the transportation system). In this study, we provide a methodology for integrating a cognitive system that allows data to be verbalized, making the causalities and hypotheses generated from the cognitive system more understandable to humans. We abstract a city subsystem—passenger flow for a taxi company—by applying fuzzy cognitive maps (FCMs). FCMs can be used as a mathematical tool for modeling complex systems built by directed graphs with concepts (e.g., policies, events, and/or domains) as nodes and causalities as edges. As a verbalization technique we introduce the restriction-centered theory of reasoning (RCT). RCT addresses the imprecision inherent in language by introducing restrictions. Using this underlying combinatorial design, our approach can handle large data sets from complex systems and make the output understandable to humans.
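A minimal FCM iteration can be sketched as follows (concept names and weights are invented for illustration; this is not the paper's taxi model): the map is a weight matrix over concepts, and each update pushes the weighted causal influences through a sigmoid squashing function.

```python
# Minimal sketch of a fuzzy cognitive map: concepts as nodes, signed causal
# weights as edges, and a sigmoid-squashed state update. Values are made up.
import numpy as np

concepts = ["taxi demand", "traffic congestion", "waiting time", "fare policy"]
# W[i, j] = causal influence of concept i on concept j, in [-1, 1]
W = np.array([[ 0.0, 0.6, 0.7, 0.0],
              [ 0.0, 0.0, 0.5, 0.0],
              [-0.4, 0.0, 0.0, 0.0],
              [-0.5, 0.0, 0.0, 0.0]])


def step(state: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """One FCM update: squash the aggregated causal influences into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * (W.T @ state)))


state = np.array([0.8, 0.3, 0.2, 0.5])   # initial activation of each concept
for _ in range(10):
    state = step(state)
print(dict(zip(concepts, np.round(state, 2))))
```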