946 results for temporal visualization techniques
Abstract:
Today, analysis of electrocardiograms (ECG) is carried out mostly on time-dependent signals from different leads displayed as graphs. Extraction of ECG parameters is performed by qualified personnel and requires particular skills. To support decoding of the cardiac depolarization phase of the ECG, there are methods that analyze space-time convolution charts in three dimensions, where the heartbeat is described by the trajectory of its electrical vector. On this basis, it can be assumed that all the options of classical ECG analysis for this time segment can also be obtained with this technique. The investigated three-dimensional ECG visualization techniques, combined with quantitative methods, yield additional features of cardiac depolarization and allow better exploitation of the information content of the given ECG signals.
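As a minimal sketch of the trajectory-based view described above (not the authors' implementation), the following plots a synthetic three-dimensional electrical-vector loop with matplotlib; the `vx`, `vy`, `vz` signals are invented stand-ins for vector components that would be derived from real lead data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the heart's electrical vector during depolarization:
# a closed loop traced over one beat (real components would come from the leads).
t = np.linspace(0, 2 * np.pi, 500)
vx = np.sin(t) * (1 + 0.3 * np.sin(3 * t))
vy = np.cos(t) * (1 + 0.3 * np.sin(3 * t))
vz = 0.5 * np.sin(2 * t)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # 3D axes for the space-time trajectory
ax.plot(vx, vy, vz)
ax.set_xlabel("Vx"); ax.set_ylabel("Vy"); ax.set_zlabel("Vz")
ax.set_title("Trajectory of the cardiac electrical vector (synthetic)")
plt.show()
```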
Abstract:
Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. Most existing systems concentrate either on mining algorithms or on visualization techniques. Though visual methods developed in information visualization have been helpful, improved understanding of a complex, large, high-dimensional dataset also requires an effective projection of the dataset onto a lower-dimensional (2D or 3D) manifold. This paper introduces a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualization domain. The framework follows Shneiderman's mantra to provide an effective user interface. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection methods, such as Generative Topographic Mapping (GTM) and Hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, billboarding, and user interaction facilities, to provide an integrated visual data mining framework. Results on a real-life high-dimensional dataset from the chemoinformatics domain are also reported and discussed. Projection results of GTM are analytically compared with the projection results from other traditional projection methods, and it is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework.
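One of the visual techniques named above, parallel coordinates, can be sketched with standard tooling; the snippet below is an illustration only (synthetic data, not the GTM/HGTM framework or the chemoinformatics dataset of the paper).

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Synthetic high-dimensional data with two classes, standing in for a real dataset.
rng = np.random.default_rng(0)
n, d = 100, 8
cols = [f"x{i}" for i in range(d)]
a = pd.DataFrame(rng.normal(0.0, 1.0, (n, d)), columns=cols).assign(label="A")
b = pd.DataFrame(rng.normal(1.5, 1.0, (n, d)), columns=cols).assign(label="B")
df = pd.concat([a, b], ignore_index=True)

# One axis per dimension; each record becomes a polyline across the axes.
parallel_coordinates(df, class_column="label", alpha=0.4)
plt.title("Parallel coordinates view of a synthetic 8-D dataset")
plt.show()
```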
Abstract:
We present a new program tool for interactive 3D visualization of some fundamental algorithms for the representation and manipulation of Bézier curves. The tool also demonstrates one of their most important applications in graphic design: creating letters by means of cubic Bézier curves. We use a Java applet and JOGL as our main visualization technologies. This choice ensures the platform independence of the created applet and contributes to the realistic 3D visualization. The applet provides basic knowledge of Bézier curves and is appropriate for illustrative and educational purposes. Experimental results are included.
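For reference, the cubic Bézier construction mentioned above can be evaluated with de Casteljau's algorithm; the sketch below is illustrative and is not the applet's Java/JOGL code.

```python
import numpy as np

def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Control points of one cubic Bézier segment, e.g. a stroke of a glyph outline.
ctrl = [(0.0, 0.0), (0.3, 1.0), (0.7, 1.0), (1.0, 0.0)]
curve = [de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 50)]
print(curve[25])   # point near the middle of the segment
```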
Abstract:
Due to the rapid advances in computing and sensing technologies, enormous amounts of data are generated every day in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets and discover hidden patterns. For both data mining and visualization to be effective, it is important to include visualization techniques in the mining process and to present the discovered patterns in a more comprehensive visual view. In this dissertation, four related problems are studied to explore the integration of data mining and data visualization: dimensionality reduction for visualizing high-dimensional datasets, visualization-based clustering evaluation, interactive document mining, and exploration of multiple clusterings. In particular, we 1) propose an efficient feature selection method (reliefF + mRMR) for preprocessing high-dimensional datasets; 2) present DClusterE to integrate cluster validation with user interaction and provide rich visualization tools for users to examine document clustering results from multiple perspectives; 3) design two interactive document summarization systems that involve user effort and generate customized summaries from 2D sentence layouts; and 4) propose a new framework which organizes different input clusterings into a hierarchical tree structure and allows interactive exploration of multiple clustering solutions.
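As a rough, hedged sketch of the preprocessing step (feature selection before visualization), the following ranks features by mutual information with the class label using scikit-learn; this is a simplified stand-in, not the reliefF + mRMR method proposed in the dissertation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic high-dimensional dataset standing in for the real corpora studied.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)

# Rank features by mutual information with the class label and keep the top 10.
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:10]
X_reduced = X[:, top]
print("selected feature indices:", top)
```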
Abstract:
In the last two decades, the instantaneous structure of a turbulent boundary layer has been examined by many in an effort to understand the dynamics of the flow. Distinct and well-defined flow patterns that seem to have great relevance to the turbulence production mechanism have been observed in the wall region [1,2]. The flow near the wall is intermittent, with periodic eruptions of the fluid, a phenomenon generally termed the "bursting process." Earlier investigations in this field were limited to liquid flows at low speeds, and the entire flow pattern was observed using flow visualization techniques. The study was later extended to boundary-layer flows in wind tunnels at higher speeds and Reynolds numbers, using hot-wire signals for the analysis of the bursting phenomenon.
Abstract:
A key trait of Free and Open Source Software (FOSS) development is its distributed nature. Nevertheless, two project-level operations, the fork and the merge of program code, are among the least well understood events in the lifespan of a FOSS project. Some projects have explicitly adopted these operations as the primary means of concurrent development. In this study, we examine the effect of highly distributed software development, as found in the Linux kernel project, on the collection and modelling of software development data. We find that distributed development calls for sophisticated temporal modelling techniques where several versions of the source code tree can exist at once. Attention must be turned towards the methods of quality assurance and peer review that projects employ to manage these parallel source trees. Our analysis indicates that two new metrics, fork rate and merge rate, could be useful for determining the role of distributed version control systems in FOSS projects. The study presents a preliminary data set consisting of version control and mailing list data.
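As a minimal sketch of how a merge-related metric might be extracted from version control data, the following counts merge commits per month in a local git clone; it is an assumption-laden illustration (fork events are not visible in a single clone's history), not the study's own measurement procedure.

```python
import subprocess
from collections import Counter

# Count merge commits per month; a rough proxy for a "merge rate" over time.
log = subprocess.run(
    ["git", "log", "--merges", "--pretty=%ad", "--date=format:%Y-%m"],
    capture_output=True, text=True, check=True,
).stdout.split()

merges_per_month = Counter(log)
for month, n in sorted(merges_per_month.items()):
    print(month, n)
```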
Abstract:
As academic libraries are increasingly supported by a matrix of database functions, the use of data mining and visualization techniques offers significant potential for future collection development and service initiatives based on quantifiable data. While data collection techniques are still not standardized and results may be skewed because of granularity problems, faulty algorithms, and a host of other factors, useful baseline data is extractable and broad trends can be identified. The purpose of the current study is to provide an initial assessment of data associated with the science monograph collection at the Marston Science Library (MSL), University of Florida. These sciences fall within the major Library of Congress Classification schedules of Q, S, and T, excluding R, TN, TR, and TT. The overall strategy of this project is to look at the potential science audiences within the university community and analyze data related to purchasing and circulation patterns, e-book usage, and interlibrary loan statistics. While a longitudinal study from 2004 to the present would be ideal, this paper presents results from the academic year July 1, 2008 to June 30, 2009, which was chosen as the pilot period because all of the data reservoirs identified above were available.
Abstract:
In a wide range of physical problems governed by differential equations, it is often of interest to obtain solutions for the transient regime, and therefore time-integration techniques must be employed. A first possibility would be to apply explicit methods, owing to their simplicity and computational efficiency. However, these methods are often only conditionally stable and are subject to severe restrictions on the choice of the time step. For advective problems, governed by hyperbolic equations, this restriction is known as the Courant-Friedrichs-Lewy (CFL) condition. When numerical solutions must be obtained over long time periods, or when the computational cost per step is high, this condition becomes an obstacle. In order to circumvent this restriction, implicit methods, which are generally unconditionally stable, are used. In this work, implicit formulations for time integration were applied to the Smoothed Particle Hydrodynamics (SPH) method so as to allow larger time increments and strong stability in the time-marching process. Owing to the high computational cost of the particle search at each time step, this implementation is only viable if efficient algorithms are applied for the type of matrix structure considered, such as Krylov subspace methods. Therefore, a study was carried out to select the methods best suited to this problem, the chosen ones being the Bi-Conjugate Gradient (BiCG), Bi-Conjugate Gradient Stabilized (BiCGSTAB), and Quasi-Minimal Residual (QMR) methods. Several test problems were used to validate the numerical solutions obtained with the implicit version of the SPH method.
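As an illustrative sketch of the kind of linear solve an implicit step entails, the following builds a backward-Euler system for a simple 1-D diffusion operator and solves it with SciPy's BiCGSTAB, one of the Krylov methods named above; it is a toy stand-in, not the SPH implementation of the work.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Backward-Euler step (I - dt*A) u_new = u_old for a 1-D discrete Laplacian,
# standing in for the sparse systems arising in an implicit SPH formulation.
n, dt = 200, 0.1
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
M = sp.identity(n, format="csr") - dt * A

u_old = np.random.default_rng(0).random(n)
u_new, info = bicgstab(M, u_old)
print("converged" if info == 0 else f"bicgstab info = {info}")
```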
Abstract:
Simultaneous non-invasive visualization of blood vessels and nerves in patients can be obtained in the eye. The retinal vasculature is a target of many retinopathies. Inflammation, readily manifest by leukocyte adhesion to the endothelial lining, is a key pathophysiological mechanism of many retinopathies, making it a valuable and ubiquitous target for disease research. Leukocyte fluorography has been extensively used in the past twenty years; however, fluorescent markers, visualization techniques, and recording methods have differed between studies. The lack of detailed protocol papers regarding leukocyte fluorography, coupled with lack of uniformity between studies, has led to a paucity of standards for leukocyte transit (velocity, adherence, extravasation) in the retina. Here, we give a detailed description of a convenient method using acridine orange (AO) and a commercially available scanning laser ophthalmoscope (SLO, HRA-OCT Spectralis) to view leukocyte behavior in the mouse retina. Normal mice are compared to mice with acute and chronic inflammation. This method can be readily adopted in many research labs.
Abstract:
Cone-capillary nozzles with cone angles varying from 10° to 120° and a capillary diameter of 120 μm are experimentally investigated for their application in the hydroentanglement process. Cone-up and cone-down configurations are tested over a range of water pressures of 30-120 bar. The effects of the cone angle on flow parameters such as the discharge and velocity coefficients and the intact length are studied. Flow visualization techniques are used to recognize the flow regimes and characteristics and to inspect and compare the intact length and appearance of the jets. Cone-down nozzles, with more consistent flow properties, lower discharges, and higher velocity coefficients, are more suitable for the hydroentanglement process. Single-cone nozzles without capillaries and with varying cone angles are also tested. The flow properties of the jets from the single-cone nozzles are compared with those from cone-capillary nozzles of the same cone angle to study the effect of the capillary section. The effect of the interaction of adjacent nozzles on the flow from multi-hole nozzles is studied, and the characteristics of the jets from multi-hole nozzles are compared with those from single-hole nozzles.
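For orientation, a common definition of the discharge coefficient is the ratio of the measured flow rate to the ideal (Bernoulli) flow rate through the orifice; the sketch below uses invented operating values, not measurements from the study.

```python
import math

# Illustrative operating point (assumed values, not data from the paper).
d = 120e-6                      # capillary diameter, m (120 micron)
rho = 998.0                     # water density, kg/m^3
delta_p = 100e5                 # pressure drop, Pa (100 bar)
q_measured = 1.1e-6             # measured volumetric flow rate, m^3/s (assumed)

area = math.pi * d**2 / 4.0
v_ideal = math.sqrt(2.0 * delta_p / rho)        # ideal Bernoulli jet velocity
cd = q_measured / (area * v_ideal)              # discharge coefficient
print(f"ideal jet velocity = {v_ideal:.1f} m/s, Cd = {cd:.2f}")
```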
Abstract:
Advances in technology have produced more and more intricate industrial systems, such as nuclear power plants, chemical plants, and petroleum platforms. Such complex plants exhibit multiple interactions among smaller units and human operators, raising the potential for disastrous failures that can propagate across subsystem boundaries. This paper analyzes industrial accident data series from the perspective of statistical physics and dynamical systems. Global data is collected from the Emergency Events Database (EM-DAT) for the period from 1903 to 2012. The statistical distributions of the number of fatalities caused by industrial accidents reveal Power Law (PL) behavior. We analyze the evolution of the PL parameters over time and observe a remarkable increase in the PL exponent in recent years. PL behavior allows prediction by extrapolation over a wide range of scales. In a complementary line of thought, we compare the data using appropriate indices and use different visualization techniques to correlate and extract relationships among industrial accident events. This study contributes to a better understanding of the complexity of modern industrial accidents and their ruling principles.
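As a minimal sketch of power-law exponent estimation (using the standard continuous maximum-likelihood estimator, which is not necessarily the fitting procedure used in the paper), the following fits a synthetic sample rather than EM-DAT data.

```python
import numpy as np

def powerlaw_alpha(data, xmin):
    """Continuous MLE of the power-law exponent for samples >= xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    x = np.asarray(data, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic fatality counts drawn from a Pareto distribution (not EM-DAT data).
rng = np.random.default_rng(0)
fatalities = (rng.pareto(1.8, 5000) + 1.0) * 10.0
print("estimated exponent:", round(powerlaw_alpha(fatalities, xmin=10.0), 2))
```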
Abstract:
Software is increasingly complex, and its development is often carried out by dispersed and changing teams. Moreover, nowadays most software is reused rather than developed from scratch. The comprehension task, inherent to maintenance, consists of analyzing several dimensions of the software in parallel. The time dimension comes into play at two levels: software changes during its evolution and during its execution. These changes take on a particular meaning when they are analyzed together with other dimensions of the software. Analyzing multidimensional data is a difficult problem, but certain methods make it possible to work around this difficulty. Semi-automatic approaches, such as software visualization, allow the user to intervene during the analysis to explore and guide the search for information. In a first stage of the thesis, we apply visualization techniques to better understand the dynamics of software during evolution and execution. Changes over time are represented by heat maps, so that the same graphical representation is used to visualize changes during evolution and during execution. Another category of approaches for understanding certain dynamic aspects of software relies on heuristics. In a second stage of the thesis, we address the identification of phases during evolution or during execution using the same approach. In this context, the premise is that there is an inherent coherence in the events, which makes it possible to isolate subsets as phases. This coherence hypothesis is then defined specifically for code-change events (evolution) and state-change events (execution). The objective of the thesis is to study the unification of these two time dimensions, evolution and execution. This is part of our aim to bring together two research fields that address the same category of problems, but from two different perspectives.
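A minimal sketch of the heat-map representation described above, assuming synthetic change counts (rows as modules, columns as time periods); it illustrates the visual idea only and is not the thesis tooling.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic change counts: rows are software modules, columns are time periods
# (versions during evolution, or sampling intervals during execution).
rng = np.random.default_rng(1)
changes = rng.poisson(lam=3.0, size=(12, 20))

fig, ax = plt.subplots()
im = ax.imshow(changes, aspect="auto", cmap="hot")   # the heat-map view
ax.set_xlabel("time period")
ax.set_ylabel("module")
fig.colorbar(im, label="number of changes")
plt.show()
```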
Abstract:
A conceptual information system consists of a database together with conceptual hierarchies. The management system TOSCANA visualizes arbitrary combinations of conceptual hierarchies by nested line diagrams and allows an on-line interaction with a database to analyze data conceptually. The paper describes the conception of conceptual information systems and discusses the use of their visualization techniques for on-line analytical processing (OLAP).
Abstract:
In this paper, we describe an interdisciplinary project in which visualization techniques were developed for and applied to scholarly work from literary studies. The aim was to bring Christof Schöch's electronic edition of Bérardier de Bataut's Essai sur le récit (1776) to the web. This edition is based on the Text Encoding Initiative's XML-based encoding scheme (TEI P5, subset TEI-Lite), now the de facto standard for machine-readable texts used chiefly in the humanities and social sciences. The intention of this edition is to make the edited text freely available on the web, to allow for alternative text views (here original and modern/corrected text), to ensure reader-friendly annotation and navigation, and to permit on-line collaboration in encoding and annotation as well as user comments, all in an open-source, generically usable, lightweight package. These aims were attained by relying on a GPL-based, public-domain CMS (Drupal) and combining it with XSL stylesheets and JavaScript.
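As a minimal sketch of applying an XSL stylesheet to a TEI document outside the Drupal setup described above, the following uses lxml; the file names `berardier.xml` and `tei2html.xsl` are placeholders, not the edition's actual resources.

```python
from lxml import etree

# Apply an XSL stylesheet to a TEI P5 document to produce an HTML view.
# File names are hypothetical placeholders.
tei_doc = etree.parse("berardier.xml")
stylesheet = etree.parse("tei2html.xsl")
transform = etree.XSLT(stylesheet)

html_view = transform(tei_doc)
with open("berardier.html", "wb") as out:
    out.write(etree.tostring(html_view, pretty_print=True, method="html"))
```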