819 results for Graphical representations
Abstract:
A Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the growing popularity of Web service-based applications, and since the messages exchanged within these applications can be complex, tools are needed to simplify the understanding of the interrelationships among Web services. This work presents a description of a graphical representation of Web service-based applications and the mechanisms inserted between Web service requesters and providers to capture the information needed to represent an application. The major contribution of this paper is to discuss and use HTTP and SOAP information to produce a graphical representation, similar to a UML sequence diagram, of Web service-based applications.
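To make the idea concrete, here is a minimal sketch, not the authors' tool, of how an intermediary placed between requester and provider might parse a captured SOAP message into one requester-to-provider arrow of a sequence-diagram-like view; the endpoint names and the captured envelope are hypothetical.

```python
# A minimal sketch (not the paper's tool) of extracting sequence-diagram
# events from an intercepted SOAP message; endpoint names are hypothetical.
import xml.etree.ElementTree as ET

SOAP_ENV = "{http://schemas.xmlsoap.org/soap/envelope/}"

def extract_interaction(soap_xml, requester, provider):
    """Return a (requester, provider, operation) triple for one captured call."""
    root = ET.fromstring(soap_xml)
    body = root.find(SOAP_ENV + "Body")
    operation = next(iter(body)).tag.split("}")[-1]  # first child = invoked operation
    return (requester, provider, operation)

captured = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetQuote xmlns="urn:example"/></soap:Body>
</soap:Envelope>"""

event = extract_interaction(captured, "ClientApp", "QuoteService")
print("%s -> %s : %s" % event)  # one arrow in a sequence-diagram-like view
```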
Abstract:
In this article, graphical representations of DNA primary sequences were generated. Topological indices and molecular connectivity indices were calculated and used to compare similarities among eight different DNA segments. Satisfactory results were achieved by this analysis.
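Since the abstract does not specify the representation or indices used, the following is only an illustrative sketch of one common style of graphical representation of a primary sequence, a 2D nucleotide walk, with a toy endpoint-distance comparison standing in for the topological and connectivity indices.

```python
# Illustrative sketch only: a simple 2D "DNA walk" graphical representation
# and a crude similarity measure; the article's exact representation and
# topological/connectivity indices are not reproduced here.
import math

STEP = {"A": (1, 0), "G": (0, 1), "C": (-1, 0), "T": (0, -1)}  # one common assignment

def dna_walk(seq):
    """Cumulative 2D coordinates tracing the sequence as a path."""
    x = y = 0
    path = [(0, 0)]
    for base in seq:
        dx, dy = STEP[base]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def endpoint_distance(seq1, seq2):
    """Toy descriptor comparison: Euclidean distance between walk endpoints."""
    (x1, y1), (x2, y2) = dna_walk(seq1)[-1], dna_walk(seq2)[-1]
    return math.hypot(x1 - x2, y1 - y2)

print(endpoint_distance("ATGGTGCACC", "ATGGTGCACG"))
```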
Abstract:
Virtual reality has a number of advantages for analyzing sports interactions, such as the standardization of experimental conditions, stereoscopic vision, and complete control of animated humanoid movement. Nevertheless, to be useful for sports applications, accurate perception of simulated movement in the virtual sports environment is essential. This perception depends on parameters of the synthetic character, such as the number of degrees of freedom of its skeleton or the level of detail (LOD) of its graphical representation. This study focuses on the influence of the latter parameter on the perception of movement. To evaluate it, the study analyzes the judgments of immersed handball goalkeepers who play against a graphically modified virtual thrower. Five graphical representations of the throwing action were defined: a textured reference level (L0), a nontextured level (L1), a wire-frame level (L2), a moving point-light display (MLD) level with a normal-sized ball (L3), and an MLD level where the ball is represented by a point of light (L4). The results show that judgments made by goalkeepers in the L4 condition are significantly less accurate than in all the other conditions (p
Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive mobile robots for the mass market, we propose to characterise key places of the environment based on their visual appearance through colour histograms. The approach for representing places using visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene while a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments using an indoor data set that a topological map in which places are characterised using visual appearance, augmented with metric clues, provides sufficient information to perform continuous metric localisation which is robust to the kidnapped-robot problem. Many topological mapping methods build a topological map by clustering visual observations to places. However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations for disambiguating places which otherwise are indistinguishable. We present a constraint-based stochastic local search method which integrates the approach for place disambiguation in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method to reliably localise the robot while building a topological map.
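A minimal sketch of the appearance-based place signature described above, assuming RGB images as NumPy arrays; the thesis's actual pipeline (metric clues, place disambiguation, SLAM back-end) is not reproduced.

```python
# A minimal sketch, assuming RGB images as numpy arrays; only the
# colour-histogram place signature and a nearest-match lookup are shown.
import numpy as np

def colour_histogram(image, bins=8):
    """Normalised joint RGB histogram used as a compact place signature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; high values suggest the same place."""
    return np.minimum(h1, h2).sum()

def localise(observation, place_signatures):
    """Return the place whose stored histogram best matches the observation."""
    h = colour_histogram(observation)
    return max(place_signatures,
               key=lambda p: histogram_intersection(h, place_signatures[p]))

# Usage with random stand-in images:
rng = np.random.default_rng(0)
places = {"corridor": colour_histogram(rng.integers(0, 256, (48, 64, 3))),
          "lab": colour_histogram(rng.integers(0, 256, (48, 64, 3)))}
print(localise(rng.integers(0, 256, (48, 64, 3)), places))
```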
Abstract:
The ability to decode graphics is an increasingly important component of mathematics assessment and curricula. This study examined 50 students aged 9 to 10 years (23 male, 27 female) as they solved items from six distinct graphical languages (e.g., maps) that are commonly used to convey mathematical information. The results of the study revealed: 1) factors which contribute to success or hinder performance on tasks with various graphical representations; and 2) how the literacy and graphical demands of tasks influence the mathematical sense-making of students. The outcomes of this study highlight the changing nature of assessment in school mathematics and identify the function and influence of graphics in the design of assessment tasks.
Exploring variation in measurement as a foundation for statistical thinking in the elementary school
Abstract:
This study was based on the premise that variation is the foundation of statistics and statistical investigations. The study followed the development of fourth-grade students' understanding of variation through participation in a sequence of two lessons based on measurement. In the first lesson all students measured the arm span of one student, revealing pathways students follow in developing understanding of variation and linear measurement (related to research question 1). In the second lesson each student's arm span was measured once, introducing a different aspect of variation for students to observe and contrast. From this second lesson, students' development of the ability to compare their representations for the two scenarios and explain differences in terms of variation was explored (research question 2). Students' documentation, in both workbook and software formats, enabled us to monitor their engagement and identify their increasing appreciation of the need to observe, represent, and contrast the variation in the data. Following the lessons, a written student assessment was used for judging retention of understanding of variation developed through the lessons and the degree of transfer of understanding to a different scenario (research question 3).
Abstract:
In recent years, considerable research effort has been directed to microarray technologies and their role in providing simultaneous information on expression profiles for thousands of genes. These data, when subjected to clustering and classification procedures, can assist in identifying patterns and provide insight into biological processes. To understand the properties of complex gene expression datasets, graphical representations can be used. Intuitively, the data can be represented as a bipartite graph, with weighted edges corresponding to gene-sample node couples in the dataset. Biologically meaningful subgraphs can then be sought, but performance is influenced both by the search algorithm and by the graph-weighting scheme; both merit rigorous investigation. In this paper, we focus on edge-weighting schemes for bipartite graphical representations of gene expression. Two novel methods are presented: the first is based on empirical evidence; the second on a geometric distribution. The schemes are compared on several real datasets, assessing performance against four essential properties: robustness to noise and missing values, discrimination, the influence of parameters on scheme efficiency, and reusability. Recommendations and limitations are briefly discussed. Keywords: edge-weighting; weighted graphs; gene expression; bi-clustering
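As a rough illustration of the bipartite representation (not the paper's two weighting schemes, which are not detailed in the abstract), the sketch below turns an expression matrix into weighted gene-sample edges, using a simple min-max normalised weight as a stand-in.

```python
# Illustrative sketch: an expression matrix as a weighted bipartite graph.
# The paper's edge-weighting schemes (empirical and geometric-distribution
# based) are not reproduced; a min-max normalised weight stands in.
import numpy as np

def bipartite_edges(expr, genes, samples):
    """Yield (gene, sample, weight) edges with weights scaled to [0, 1]."""
    lo, hi = np.nanmin(expr), np.nanmax(expr)
    for i, g in enumerate(genes):
        for j, s in enumerate(samples):
            v = expr[i, j]
            if not np.isnan(v):              # missing values produce no edge
                yield g, s, (v - lo) / (hi - lo)

expr = np.array([[2.1, 7.9, np.nan],
                 [6.4, 6.0, 1.2]])
for edge in bipartite_edges(expr, ["geneA", "geneB"], ["s1", "s2", "s3"]):
    print(edge)
```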
Abstract:
Sensor networks can be naturally represented as graphical models, where the edge set encodes the presence of sparsity in the correlation structure between sensors. Such graphical representations can be valuable for information mining purposes as well as for optimizing bandwidth and battery usage with minimal loss of estimation accuracy. We use a computationally efficient technique for estimating sparse graphical models which fits a sparse linear regression locally at each node of the graph via the Lasso estimator. Using a recently suggested online, temporally adaptive implementation of the Lasso, we propose an algorithm for streaming graphical model selection over sensor networks. With battery-consumption minimization applications in mind, we use this algorithm as the basis of an adaptive querying scheme. We discuss implementation issues in the context of environmental monitoring using sensor networks, where the objective is short-term forecasting of local wind direction. The algorithm is tested against real UK weather data, and conclusions are drawn about certain tradeoffs inherent in decentralized sensor network data analysis. © 2010 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
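The nodewise-Lasso idea can be sketched in batch form as follows, using scikit-learn; the paper's online, temporally adaptive Lasso and the querying scheme built on it are not reproduced. Nonzero coefficients in each local regression suggest edges, symmetrised here with an OR rule.

```python
# A batch sketch of the nodewise-Lasso idea (regress each sensor on all
# others; nonzero coefficients suggest edges). Requires scikit-learn; the
# paper's online, temporally adaptive variant is not shown.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_neighbourhoods(X, alpha=0.1):
    """Return a symmetric edge set over the columns (sensors) of X."""
    n, p = X.shape
    edges = set()
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        for k, c in zip(others, coef):
            if abs(c) > 1e-8:
                edges.add((min(j, k), max(j, k)))   # OR-rule symmetrisation
    return edges

rng = np.random.default_rng(1)
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)]
              + [rng.normal(size=(200, 1))])        # sensors 0-2 correlated, 3 independent
print(lasso_neighbourhoods(X))
```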
Abstract:
This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution (a special type of Markov model), and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fit of the C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated data set consisting of 20,000 cases. The results show that the original C-Ph distribution is recaptured, and the fit of the network to the data is discussed.
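For readers unfamiliar with the continuous component, here is a minimal sketch of a 3-phase Coxian phase-type density, f(t) = α·exp(Tt)·t0 with t0 = -T·1; the rates below are hypothetical placeholders, and the structure-learning algorithm itself is not shown.

```python
# A minimal sketch, assuming a 3-phase Coxian distribution: density
# f(t) = alpha . expm(T t) . t0, with t0 = -T 1. This illustrates only the
# continuous component the C-Ph network models, not the learning algorithm.
import numpy as np
from scipy.linalg import expm

lam = np.array([1.5, 1.0])         # onward rates between phases (hypothetical)
mu = np.array([0.3, 0.5, 1.0])     # exit (absorption) rates (hypothetical)

T = np.array([[-(lam[0] + mu[0]), lam[0], 0.0],
              [0.0, -(lam[1] + mu[1]), lam[1]],
              [0.0, 0.0, -mu[2]]])
alpha = np.array([1.0, 0.0, 0.0])  # process starts in the first phase
t0 = -T @ np.ones(3)               # exit-rate vector

def coxian_pdf(t):
    """Density of the absorption time: alpha . expm(T t) . t0."""
    return float(alpha @ expm(T * t) @ t0)

print([round(coxian_pdf(t), 4) for t in (0.5, 1.0, 2.0)])
```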
Abstract:
The final decades of the 20th century saw the growing prominence of Information Design, which has since taken on countless forms and designations in a process of affirmation and self-discovery. The proliferation of available data has given design, and this branch of drawing-for-understanding in particular, increasing visibility and the responsibility of finding, from information, new means for the construction of meaning. From design to computer engineering, several disciplines now converge on this goal, albeit with different models and tools. This convergence invites comparison between models, which tend to be valued all the more as their representations become more objective. From Playfair to Bertin, the graphical representations of the last two hundred years have been situated within disciplines such as Economics, Sociology, or Management, exploring functional metaphors aimed at making the numerical translation evident. Design, as a cultural mediator and through Drawing, tends to add to the same exercise a narrative or illustrative dimension, summoning the author's own existence into the interpretation of the same numerical data. This research offers new processes of semiosis, associating the objectivity of quantitative data with the subjectivity of a culture formulated from the individual as interpreter. In the search for knowledge, information design is thus recognized as gaining competence through the mediation of experience, reconnecting ethics, technique, and aesthetics.
Abstract:
The objective of this research is to analyze the internal organization of a survey firm through the lens of the workplace spheres identified by Bélanger, Giles and Murray (2004): production management, work organization, and the employment relationship. More specifically, we seek to understand how the firm under study handles the management of organizational flexibility and what impact this has on the three spheres of work. The analysis uses a case-study methodology and draws on several types of material: occasional observations, informal interviews, administrative databases, and the evaluation reports of telephone interviews conducted by the interviewers. Likewise, the analysis of the results uses both more classical methods, such as correlations, and graphical representations and qualitative analyses. The analysis identifies an operating logic at work in the different spheres of employment: the extensive standardization of work processes (in the field of production management), the reduction of room for manoeuvre (in the field of work organization), and the non-recognition of the interviewers' expertise (in the field of the employment relationship). The contradictions identified in the analysis, between the spheres of employment and the objectives of flexibility, show that the structures put in place block, to a certain extent, the capacity for initiative and adaptation that flexibility demands. The research showed that what is asked of interviewers reflects both the demands of flexibility, as noted in this thesis, and the social demands placed on survey methodology. Everything suggests that these demands can cause employee performance to plateau. Keywords: call centres, interviewers, survey firms, organizational flexibility, production management, work organization, employment relationship, emotional labour.
Abstract:
From a public health risk analysis perspective, exposure estimation is of paramount importance. Among existing approaches to exposure estimation, the use of tools such as food-frequency questionnaires, toxicokinetic modelling, or dose reconstruction, in addition to biomonitoring, makes it possible to refine the estimates and thus to better characterize health risks. These tools and approaches were developed and applied to two substances of interest, methylmercury and selenium, because of the well-known toxic effects of methylmercury, the interaction between methylmercury and selenium that potentially reduces those toxic effects, and the existence of common sources through fish consumption. The general objective of this thesis was therefore to produce the missing kinetic and comparative data needed for the validation and interpretation of approaches and tools for assessing exposure to methylmercury and selenium. To this end, the influence of the choice of method for assessing methylmercury exposure was determined by comparing the daily intakes and health risks estimated by different approaches (direct exposure assessment through biomonitoring combined with toxicokinetic modelling, or indirect assessment by food-frequency questionnaire). Important differences between the two approaches were observed: daily methylmercury intakes estimated by questionnaire are on average six times higher than those estimated from biomonitoring and modelling. The two methods lead to divergent health risk assessments, since with the indirect approach the estimated daily methylmercury doses exceed Health Canada guidelines for 21 of the 23 volunteers, whereas with the direct approach only 2 of the 23 volunteers are likely to exceed them. These differences could be due, among other things, to recall and desirability biases in completing the questionnaires. In addition, the study of the distribution of selenium in different biological matrices following non-dietary exposure (a shampoo with a high selenium content) aimed, on the one hand, to study the kinetics of selenium from this exposure source and, on the other hand, to evaluate the contribution of this source to the total body burden. Monitoring biological concentrations (blood, urine, hair, and nails) over an 18-month period in volunteers exposed to a non-dietary selenium source helped clarify the mechanisms of selenium transfer from the absorption site to the blood (concomitant regulated and unregulated pathways). This showed that, unlike for methylmercury, using hair as a biomarker can lead to a substantial overestimation of the actual selenium body burden if confounding factors, such as the use of selenium-containing shampoo, are not controlled. Finally, an exhaustive analysis of selenium biomonitoring data from 75 published studies provided a better understanding of the overall kinetics of selenium in the human body.
In particular, it enabled the development of a tool linking daily intakes and biological selenium concentrations in the different matrices through mathematical algorithms. Consequently, using these kinetic data, expressed as a system of logarithmic equations together with their graphical representation, it is possible to estimate an individual's daily intake from various biological samples, and thereby to facilitate the comparison of selenium biomonitoring studies that use different biomarkers. Taken together, these research results show that the method chosen to assess exposure has a major impact on the associated risk estimates. Furthermore, the research showed that non-dietary selenium does not contribute significantly to the total body burden, but does constitute a confounding factor in estimating the actual selenium body burden. Finally, determining the equations and coefficients relating selenium concentrations across different biological matrices, using a large kinetic database, helps to better interpret biomonitoring results.
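As a purely illustrative sketch of the kind of log-log relation such a tool encodes, the snippet below inverts a hypothetical regression linking a biomarker concentration to daily intake; the coefficients are placeholders, not the thesis's fitted values.

```python
# A minimal sketch of a log-log relation linking a biomarker concentration
# to estimated daily intake; coefficients a and b are hypothetical
# placeholders, not the thesis's fitted values.
import math

def estimated_intake(biomarker_conc, a=1.0, b=0.8):
    """Invert log10(conc) = a + b * log10(intake) to recover the daily intake."""
    return 10 ** ((math.log10(biomarker_conc) - a) / b)

print(round(estimated_intake(150.0), 1))  # e.g. blood Se in ug/L -> ug/day
```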
Abstract:
The aim of this paper is to indicate how TOSCANA may be extended to allow graphical representations not only of concept lattices but also of concept graphs in the sense of Contextual Logic. The contextual-logic extension of TOSCANA requires the logical scaling of conceptual and relational scales, for which we propose the Peircean Algebraic Logic as reconstructed by R. W. Burch. As graphical representations we recommend, besides labelled line diagrams of concept lattices and Sowa's diagrams of conceptual graphs, particular information maps for utilizing background knowledge as much as possible. Our considerations are illustrated by a small information system about the domestic flights in Austria.
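To ground the terminology, the following sketch computes, by brute force, the formal concepts underlying a concept lattice for a tiny object-attribute context loosely inspired by the Austrian domestic-flights example; the context is hypothetical, and TOSCANA itself and the Peircean Algebraic Logic extension are not modelled.

```python
# Illustrative sketch: formal concepts of a tiny object x attribute context,
# computed by brute force. The flight 'objects' and attributes are made up.
from itertools import combinations

context = {"VIE-GRZ": {"domestic", "jet"},
           "VIE-INN": {"domestic", "propeller"},
           "GRZ-INN": {"domestic"}}
attributes = set().union(*context.values())

def extent(B):
    """Objects having every attribute in B."""
    return {g for g, attrs in context.items() if B <= attrs}

def intent(A):
    """Attributes shared by every object in A."""
    if not A:
        return set(attributes)
    return set.intersection(*(context[g] for g in A))

concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        ext = extent(set(combo))                       # B' = derived extent
        concepts.add((frozenset(ext), frozenset(intent(ext))))  # (B', B'')
for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(itt))
```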
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 - p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to informative graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
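A minimal sketch of the classical chi-square HWE test from genotype counts, together with ternary-plot coordinates in which exact-HWE compositions trace the parabola (p², 2pq, q²); the cited graphical tests are implemented in R, so this Python version is illustrative only.

```python
# A minimal sketch of the classical chi-square HWE test (1 df) plus a
# standard mapping of a unit-sum composition to ternary-plot coordinates.
import math

def hwe_chisq(n_AA, n_AB, n_BB):
    """Chi-square statistic comparing observed genotype counts with HWE."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)      # allele frequency of A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_AA, n_AB, n_BB)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def ternary_xy(f_AA, f_AB, f_BB):
    """Map genotype frequencies (unit-sum) to 2D ternary-plot coordinates."""
    return f_AB / 2 + f_BB, f_AB * math.sqrt(3) / 2

chi2 = hwe_chisq(6, 43, 51)
print(round(chi2, 3), "reject HWE at 5%" if chi2 > 3.841 else "compatible with HWE")
print(ternary_xy(0.06, 0.43, 0.51))  # where this SNP sits in the ternary plot
```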
Abstract:
Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable to several stages of the knowledge discovery process, a common use of visualization is in the initial stages, to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations - that is, where user actions can affect multiple visualizations when desired - allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to others if desired, the model also allows runtime configuration of coordinated user actions over different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
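The linking requirement can be sketched as a tiny coordination model in which independently developed views register with a coordinator and local user actions propagate to the other views when linking is enabled; the class and method names are hypothetical, not the paper's actual model.

```python
# A minimal sketch of coordinated multiple views: views register with a
# coordinator, and a local action is broadcast to the others when linking
# is on. Names are hypothetical, not the paper's model.
class Coordinator:
    def __init__(self):
        self.views = []
        self.linked = True          # runtime-configurable coordination

    def register(self, view):
        self.views.append(view)
        view.coordinator = self

    def broadcast(self, source, action, payload):
        if not self.linked:
            return
        for view in self.views:
            if view is not source:
                view.on_remote_action(action, payload)

class View:
    def __init__(self, name):
        self.name, self.coordinator = name, None

    def select(self, items):        # a local user action...
        print(f"{self.name}: user selected {items}")
        self.coordinator.broadcast(self, "select", items)

    def on_remote_action(self, action, payload):
        print(f"{self.name}: mirroring {action} of {payload}")

hub = Coordinator()
scatter, parallel = View("scatterplot"), View("parallel-coords")
hub.register(scatter)
hub.register(parallel)
scatter.select(["station-17", "station-42"])   # propagates to the other view
```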