918 results for Spatial analysis statistics -- Data processing


Relevance:

100.00%

Publisher:

Abstract:

In recent years, the main orientation of Formal Concept Analysis (FCA) has turned from mathematics towards computer science. This article provides a review of this new orientation and analyzes why and how FCA and computer science attracted each other. It discusses FCA as a knowledge representation formalism using five knowledge representation principles provided by Davis, Shrobe, and Szolovits [DSS93]. It then studies how and why mathematics-based researchers were attracted to computer science. We argue for continuing this trend by integrating the two research areas FCA and Ontology Engineering. The second part of the article discusses three lines of research which witness the new orientation of Formal Concept Analysis: FCA as a conceptual clustering technique and its application to supporting the merging of ontologies; the efficient computation of association rules and the structuring of the results; and the visualization and management of conceptual hierarchies and ontologies, including their application in an email management system.
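
FCA's central construct is the formal concept: a pair of an object set and an attribute set that mutually determine each other. The following is a minimal brute-force sketch of enumerating the concepts of a tiny, hypothetical context; the objects, attributes, and enumeration strategy are invented for illustration and are not taken from the article (practical FCA tools use more efficient algorithms such as NextClosure).

```python
from itertools import combinations

# Toy formal context: objects mapped to their attribute sets (hypothetical data).
context = {
    "duck":    {"can_fly", "lays_eggs"},
    "ostrich": {"lays_eggs"},
    "bat":     {"can_fly"},
}
attributes = set().union(*context.values())

def common_attributes(objs):
    """Attributes shared by every object in objs (the derivation A')."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_having(attrs):
    """Objects that have every attribute in attrs (the derivation B')."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (extent, intent) with extent' = intent and intent' = extent.
concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for subset in combinations(objs, r):
        intent = common_attributes(set(subset))
        extent = objects_having(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```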

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study two orthogonal extensions of the classical data mining problem of mining association rules, and show how they naturally interact. The first is the extension from a propositional representation to datalog, and the second is the condensed representation of frequent itemsets by means of Formal Concept Analysis (FCA). We combine the notion of frequent datalog queries with iceberg concept lattices (also called closed itemsets) of FCA and introduce two kinds of iceberg query lattices as condensed representations of frequent datalog queries. We demonstrate that iceberg query lattices provide a natural way to visualize relational association rules in a non-redundant way.
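
To make the notion of a condensed representation concrete, here is a minimal sketch of mining frequent closed itemsets, the propositional special case of the iceberg lattices discussed above. The toy transaction database and the support threshold are purely illustrative and not taken from the paper.

```python
from itertools import combinations

# Hypothetical transaction database.
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]
min_support = 2
items = set().union(*transactions)

def support(itemset):
    """Number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def closure(itemset):
    """Intersection of all transactions containing the itemset (the closure operator)."""
    covering = [t for t in transactions if itemset <= t]
    return set.intersection(*covering) if covering else items

# Frequent closed itemsets: the iceberg part of the concept lattice above min_support.
closed = set()
for r in range(1, len(items) + 1):
    for candidate in combinations(sorted(items), r):
        s = set(candidate)
        if support(s) >= min_support and closure(s) == s:
            closed.add(frozenset(s))

for c in sorted(closed, key=len):
    print(sorted(c), "support:", support(c))
```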

Relevance:

100.00%

Publisher:

Abstract:

As the number of resources on the web exceeds by far the number of documents one can track, it becomes increasingly difficult to remain up to date on one's own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract any conceptual description of the content. One way to overcome this problem is social bookmark tools, which are rapidly emerging on the web. In such systems, users set up lightweight conceptual structures called folksonomies, thus overcoming the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes more and more stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach allows for any kind of data, as it does not rely on the internal structure of the documents. In particular, this makes it possible to consider different data types in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
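
The differential ranking idea can be sketched as follows: run a PageRank-style weight spreading over an undirected projection of the folksonomy once without and once with a topic preference vector, and rank nodes by the difference. The tag assignments, damping factor, and helper names below are hypothetical illustrations, not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical tag assignments (user, tag, resource).
assignments = [
    ("u1", "python", "r1"),
    ("u1", "data", "r1"),
    ("u2", "python", "r2"),
    ("u2", "web", "r2"),
    ("u3", "data", "r1"),
]

# Project the triadic hypergraph onto an undirected weighted graph:
# each assignment links user-tag, user-resource, and tag-resource.
adj = defaultdict(lambda: defaultdict(float))
for u, t, r in assignments:
    for a, b in ((u, t), (u, r), (t, r)):
        adj[a][b] += 1.0
        adj[b][a] += 1.0

nodes = list(adj)

def weight_spreading(preference=None, damping=0.7, iterations=50):
    """PageRank-style rank with an optional preference (personalization) vector."""
    pref = preference or {n: 1.0 / len(nodes) for n in nodes}
    rank = dict(pref)
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            incoming = sum(rank[m] * adj[m][n] / sum(adj[m].values()) for m in adj[n])
            new_rank[n] = damping * incoming + (1 - damping) * pref.get(n, 0.0)
        rank = new_rank
    return rank

# Differential ranking: rank with a topic preference minus the baseline rank.
baseline = weight_spreading()
topic = weight_spreading({n: (1.0 if n == "python" else 0.0) for n in nodes})
diff = {n: topic[n] - baseline[n] for n in nodes}
print(sorted(diff.items(), key=lambda kv: -kv[1])[:5])
```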

Relevance:

100.00%

Publisher:

Abstract:

Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into the content and structure of these systems. In this paper, we analyse the main network characteristics of two of these systems. We consider their underlying data structures – so-called folksonomies – as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.
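
As a rough illustration of the co-occurrence network construction, the sketch below builds a tag co-occurrence graph from a few invented posts and computes the two classical measures mentioned above. It assumes the networkx library and is not the paper's actual pipeline or dataset.

```python
import itertools
import networkx as nx  # assumed available; any graph library would do

# Hypothetical posts: each is the set of tags one user assigned to one resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "data"},
    {"data", "statistics"},
    {"statistics", "tutorial"},
]

# Tag co-occurrence network: tags are nodes; an edge means the two tags
# appear together in at least one post, with the weight counting co-occurrences.
G = nx.Graph()
for tags in posts:
    for a, b in itertools.combinations(sorted(tags), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Classical small-world indicators adapted to the tag network.
print("clustering coefficient:", nx.average_clustering(G))
print("characteristic path length:", nx.average_shortest_path_length(G))
```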

Relevance:

100.00%

Publisher:

Abstract:

Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into the content and structure of these systems. In this paper, we analyse the main network characteristics of two of these systems. We consider their underlying data structures – so-called folksonomies – as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.

Relevance:

100.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
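
The evolutionary loop described above can be sketched generically as below. The program representation, the fitness simulation, and the variation operators are stand-ins here (plain numeric vectors and a toy objective), since the thesis' actual representations (SGP, LGP, RBGP, ...) and its network simulator are far more involved.

```python
import random

def simulate(program):
    """Placeholder: run the candidate in a randomized network simulation and return
    how far the observed global behavior is from the specification (lower is better)."""
    return sum((gene - 1.0) ** 2 for gene in program)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] += random.gauss(0, 0.1)
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

POP_SIZE, GENERATIONS = 30, 50
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Average fitness over several randomized simulations per candidate.
    scored = sorted(population, key=lambda p: sum(simulate(p) for _ in range(3)) / 3)
    survivors = scored[:POP_SIZE // 2]  # select the most promising candidates
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = min(population, key=simulate)
print("best candidate:", best, "fitness:", simulate(best))
```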

Relevance:

100.00%

Publisher:

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those. The latter, however, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords, and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Hereby, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
In order to complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then take a look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
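
As a rough illustration of the first two steps discussed above (relatedness between keywords and a generality proxy), the following sketch computes cosine similarity of tag co-occurrence vectors and a deliberately simple generality measure over invented posts; the measures studied in the thesis are more varied and are evaluated against reference taxonomies.

```python
import math
from collections import Counter, defaultdict

# Hypothetical posts: tag sets from a social annotation system.
posts = [
    {"programming", "python", "tutorial"},
    {"programming", "java"},
    {"python", "scripting"},
    {"java", "tutorial"},
]

# Co-occurrence vectors: how often each tag appears together with every other tag.
cooc = defaultdict(Counter)
for tags in posts:
    for a in tags:
        for b in tags:
            if a != b:
                cooc[a][b] += 1

def relatedness(a, b):
    """Cosine similarity of co-occurrence vectors, one simple relatedness measure."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def generality(tag):
    """Generality proxy: the number of distinct tags that co-occur with it."""
    return len(cooc[tag])

print("relatedness(python, java):", relatedness("python", "java"))
print("generality:", {t: generality(t) for t in cooc})
```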

Relevance:

100.00%

Publisher:

Abstract:

The possibility to develop automatically running models which can capture some of the most important factors driving the urban climate would be very useful for many planning tasks. With the help of these modelled climate data, the creation of the typically used "Urban Climate Maps" (UCM) would be accelerated and facilitated. This work describes the development of a special ArcGIS software extension, along with two supporting databases, to achieve this functionality. At present, the central issues are the lack of comparability between different UCMs and the imprecise planning advice that goes along with the significant technical problems of manually creating conventional maps. Inflexibility and static behaviour also reduce the maps' practicality. Experience shows that planning processes are more productive when new planning parameters can be entered directly via the existing work surface, so that the impact of the changed data is mapped immediately where possible. In addition to the direct climate figures, information from other planning areas (such as regional characteristics and developments) has to be taken into account when creating the UCM as well. Taking all these requirements into consideration, an automated calculation process for urban climate impact parameters will serve to make the creation of homogeneous UCMs more efficient.

Relevance:

100.00%

Publisher:

Abstract:

Pounamu (NZ jade), or nephrite, is a protected mineral in its natural form following the transfer of ownership back to Ngai Tahu under the Ngai Tahu (Pounamu Vesting) Act 1997. Any theft of nephrite is prosecutable under the Crimes Act 1961. Scientific evidence is essential in cases where origin is disputed. A robust method for discrimination of this material through the use of elemental analysis and compositional data analysis is required. Initial studies have characterised the variability within a given nephrite source. This has included investigation of both in situ outcrops and alluvial material. Methods for the discrimination of two geographically close nephrite sources are being developed. Key Words: forensic, jade, nephrite, laser ablation, inductively coupled plasma mass spectrometry, multivariate analysis, elemental analysis, compositional data analysis
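
A common first step in compositional data analysis of such elemental measurements is the centered log-ratio (clr) transform followed by distance-based or multivariate comparison. The sketch below uses invented concentration values purely for illustration; it is not the study's actual dataset or discrimination workflow.

```python
import numpy as np

# Hypothetical element concentrations (ppm) for nephrite samples; each row is
# one sample, columns are elements. Values are purely illustrative.
samples = np.array([
    [120.0, 35.0, 8.0, 2.0],   # putative source A
    [115.0, 40.0, 7.5, 2.2],   # putative source A
    [60.0,  80.0, 15.0, 5.0],  # putative source B
])

def clr(x):
    """Centered log-ratio transform, the usual opening move in compositional data analysis."""
    closed = x / x.sum(axis=1, keepdims=True)  # close compositions to sum to 1
    log = np.log(closed)
    return log - log.mean(axis=1, keepdims=True)

transformed = clr(samples)

# Euclidean (Aitchison) distances between samples after the clr transform;
# smaller distances suggest a common source.
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        d = np.linalg.norm(transformed[i] - transformed[j])
        print(f"distance sample {i} - sample {j}: {d:.3f}")
```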

Relevance:

100.00%

Publisher:

Abstract:

Notes on the use of SPSS, used in Research Skills for Biomedical Science.

Relevance:

100.00%

Publisher:

Abstract:

This study followed an observational, cross-sectional, descriptive design with a sample of 59 workers of the Congress of the Republic of Colombia, to whom we applied a survey based on the Nordic questionnaire (1) together with demographic information: age, gender, position and seniority, as well as height and weight to estimate body mass index (BMI). Frequencies, percentages and measures of central tendency were used for the analysis of the information. Data processing was performed with the software Stat Info Vocational (2) and Excel version 2013.
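
For reference, BMI is estimated as weight in kilograms divided by the square of height in metres; a minimal sketch with invented records (not the study data):

```python
import statistics

# Hypothetical records (weight in kg, height in m); illustrative only.
workers = [(70.0, 1.72), (85.0, 1.80), (62.0, 1.60)]

# Body mass index: weight divided by the square of height.
bmis = [round(w / h ** 2, 1) for w, h in workers]

print("BMI values:", bmis)
print("mean:", statistics.mean(bmis), "median:", statistics.median(bmis))
```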

Relevance:

100.00%

Publisher:

Abstract:

This research analyses the topic of firearms and some of the issues that arise around them. It seeks to examine the effectiveness of the citizen disarmament initiatives and policies that have been carried out in various countries, as well as at the national level, which aim to raise awareness among the civilian population in order to restrict the carrying of these objects, which are used to commit homicides. First, some concepts relevant to this document are presented, along with a worldwide overview of figures and data related to firearms, as well as the arguments for and against them. Second, the international, regional and local legislation that exists and is applied to firearms with respect to their commercialization, possession and carrying is analysed. Third, the citizen disarmament programmes and policies implemented in Argentina and Mexico City are explained and analysed. Fourth, the citizen disarmament campaigns carried out during the last decades in Bogotá, Medellín and Cali are analysed, examining the effectiveness and impact of these measures on the reduction of homicides in those cities. Finally, the elements of a public policy are analysed and, for this case, the formulation of a public policy of citizen disarmament in Colombia.

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Colorectal cancer is the third most frequently diagnosed cancer in men and the second in women worldwide. Up to 1,000 new cases are diagnosed in Colombia each year, so it is important to understand the experience with this pathology in a recently created centre of expertise at "Méderi, Hospital Universitario Mayor". Materials and methods: A cross-sectional study was conducted of the population diagnosed with colorectal cancer treated between August 2012 and December 2014, corresponding to the period during which the Coloproctology service has been operating. Results: A total of 152 patients with colorectal cancer were treated at the institution. 91% of the patients underwent surgery. The most frequent stage was IV. Only 4.9% presented anastomotic dehiscence, figures consistent with the literature when management is in the hands of experts. The most frequent histological subtype was moderately differentiated adenocarcinoma, and perioperative mortality was 2.63%. Discussion: Colorectal cancer is an entity with high morbidity and mortality, which can change if screening tests are performed so that early and timely management can be provided. The surgeon's experience and the discussion of patients in multidisciplinary boards also play an important role. Key words: colon cancer, rectal cancer, epidemiology, staging

Relevance:

100.00%

Publisher:

Abstract:

Objective: To establish the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the appearance of visual symptoms in computer operators. Materials and methods: Cross-sectional, correlational study in a sample of 136 administrative workers of a call centre belonging to a health institution in the city of Bogotá, using a questionnaire that assessed sociodemographic and occupational variables, applying the computer vision symptom scale (CVSS17), carrying out a medical evaluation, and measuring illuminance and the distance between the operator and the computer screen. With the collected data a bivariate statistical analysis was performed and the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the appearance of visual symptoms associated with computer use was established. The analysis was carried out with measures of central tendency and dispersion and with the parametric Pearson or non-parametric Spearman correlation coefficient; normality was assessed beforehand with the Shapiro-Wilk test. Statistical tests were evaluated at a significance level of 5% (p<0.05). Results: The mean age of the study participants was 36.3 years, with a range between 22 and 57 years, and the predominant gender was female, at 79.4%. Visual symptoms associated with computer screen use were found in 59.6%, the most frequent being epiphora (70.6%), photophobia (67.6%) and ocular burning (54.4%). A significant inverse correlation was reported between illuminance levels and reported photophobia (p=0.02; r = 0.262). On the other hand, no significant correlation was found between the reported symptoms and visual angle, visual acuity or contrast discrimination. Conclusion: The workplace lighting conditions of the study group are related to reported photophobia. An association was also found between visual symptoms and sociodemographic variables, specifically gender, screen photophobia, visual fatigue and photophobia.
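
The statistical procedure described (Shapiro-Wilk normality check, then Pearson or Spearman correlation at the 5% level) can be sketched as follows; the illuminance and symptom values are invented for illustration and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats  # scipy assumed available

# Hypothetical measurements: workplace illuminance (lux) and a photophobia
# symptom score per worker; illustrative values only.
illuminance = np.array([300, 450, 500, 320, 610, 280, 700, 390, 520, 440])
photophobia = np.array([4, 3, 2, 4, 1, 5, 1, 3, 2, 3])

# Check normality first (Shapiro-Wilk), then pick Pearson or Spearman.
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (illuminance, photophobia))
if normal:
    r, p = stats.pearsonr(illuminance, photophobia)
    method = "Pearson"
else:
    r, p = stats.spearmanr(illuminance, photophobia)
    method = "Spearman"

print(f"{method} correlation: r = {r:.3f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```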