926 results for visual data analysis
Abstract:
The purpose of this study is to evaluate the sensitivity, specificity and predictive values of the Cuestionario Anamnésico de Síntomas de Miembro Superior y Columna (CASMSC; an anamnestic questionnaire of upper-limb and spine symptoms) developed by the Unidad de Investigación de Ergonomía de Postura y Movimiento (EPM). A descriptive, correlational study was carried out through secondary analysis of a database containing records of food-industry workers (n=401) from 2013, who had completed the CASMSC and had also undergone a clinical physiotherapeutic evaluation of the same body segments; the latter was used as the gold standard. One-way analysis of variance was applied to test for statistical differences by age, length of service and gender. The sensitivity, specificity and predictive values of the CASMSC are reported with their respective 95% confidence intervals. The prevalence of a positive threshold for suspected musculoskeletal disorder (MSD), for both the upper limb and the spine, was well above the national average for the sector. The sensitivity of the CASMSC for the upper limb ranged from 80% to 94.57%, while for the cervical and lumbar spine it was 36.4% and 43.4%, respectively. For the dorsal region it was almost double that of the other two regions (85.7%). The upper-limb section of the CASMSC is recommended given its high level of sensitivity.
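For reference, the diagnostic indices reported above follow the standard 2x2 contingency-table definitions, with the physiotherapeutic evaluation as the gold standard; this is standard notation rather than anything specific to the study:

```latex
\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP},
\]
\[
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN},
\]
```

where TP, FP, TN and FN are the counts of true positives, false positives, true negatives and false negatives of the questionnaire relative to the clinical evaluation.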
Abstract:
Our research is framed within the dynamic conception of intelligence, and specifically within the processes that make up cerebral processing in the information-integration model described by Das, Kirby and Jarman (1979). The two cerebral processes that constitute the basis of intelligent behaviour are simultaneous processing and sequential processing; they are the two main strategies of information processing. Any kind of stimulus can be processed either sequentially (serial, verbal, analysis) or simultaneously (global, visual, synthesis). Building on the literature review, and convinced that by approaching the peculiarities of information processing we come closer to understanding the process that leads to intelligent behaviour, and therefore to learning, we formulated the following working hypothesis: in preschool children (between 3 and 6 years of age) both types of processing will be present and will vary according to age, sex, attention, learning difficulties, language problems, bilingualism, sociocultural level, hand dominance, mental level and the presence of pathology. The differences found will allow us to formulate criteria and guidelines for educational intervention. Our objectives are to measure processing in preschool children from the Girona comarques, to verify the relationship of each type of processing with the variables mentioned, to check on the basis of our results whether a parallel can be drawn between processing and the contributions of the localizationist conception of cerebral functions, and to derive guidelines for pedagogical intervention. As for the method, we selected a representative sample of the boys and girls enrolled in the public schools of the Girona comarques during the 1992/93 school year, by means of stratified, cluster-based random sampling. The actual sample size is two hundred and sixty-one subjects. The instruments used were the following: the K-ABC test by Kaufman & Kaufman (1983) for the assessment of processing; a questionnaire addressed to parents to collect the relevant information; interviews with the teachers; and the Goodenough Human Figure test. With regard to the results of our research, and in line with the proposed objectives, we note the following findings. In preschool children aged between three and six years, both types of cerebral processing are present, with neither predominating over the other; the two operate in an interrelated way. Both types of processing improve with age, but differences arise depending on mental level: a normal mental level is associated with improvement in both types of processing, whereas with a deficient mental level essentially only sequential processing improves. Likewise, simultaneous processing is more closely related to complex cognitive functions and is more dependent on mental level than sequential processing. Both learning difficulties and language problems predominate in children with a significant imbalance between the two types of processing; learning difficulties are more closely related to a deficiency in simultaneous processing, whereas language problems are more closely related to a deficiency in sequential processing.
Low sociocultural levels are associated with poorer results in both types of processing. On the other hand, a significant predominance of sequential processing is more frequent among bilingual children. The Human Figure test behaves as a marker of simultaneous processing, and attentional level as a marker of the severity of the problem affecting processing, in the following order: deficient mental level, learning difficulties and language problems. Attentional deficits are linked to deficiencies in simultaneous processing and to the presence of pathology. With regard to hand dominance, no differences in processing are found. Finally, with respect to sex, we can only report that when one of the two types of processing is deficient, and there is therefore an imbalance in processing, the number of affected boys significantly exceeds the number of affected girls.
Abstract:
Eye tracking has become a preponderant technique in the evaluation of user interaction and behaviour with study objects in defined contexts. Common eye tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of the data representation and information visualization techniques used in data analysis across different contexts (advertising, websites, television news and video games). Additionally, the paper presents several methodological approaches that resulted from studies completed or under development at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques proved insufficient; new forms of representing data, built on common techniques, were therefore developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of its contribution to the respective area is presented, together with the data representation techniques used and some of the results obtained.
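As an illustration of how fixations can be obtained from raw gaze samples, the sketch below implements a simple dispersion-threshold (I-DT-style) fixation filter; the sample format and thresholds are illustrative assumptions, not the representation techniques developed in the studies described.

```python
# Minimal dispersion-threshold (I-DT-style) fixation detector.
# Assumes gaze samples as (x, y) screen coordinates at a fixed sampling rate.
# Thresholds are illustrative, not taken from the studies described above.

def detect_fixations(samples, max_dispersion=25.0, min_samples=6):
    """Return (start_index, end_index, centroid) tuples for detected fixations."""
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if _dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the dispersion stays below the threshold.
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            xs = [p[0] for p in samples[start:end]]
            ys = [p[1] for p in samples[start:end]]
            fixations.append((start, end, (sum(xs) / len(xs), sum(ys) / len(ys))))
            start = end
        else:
            start += 1
    return fixations

def _dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))
```

Saccades can then be taken, to a first approximation, as the gaps between consecutive detected fixations.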
Abstract:
Virtual globe technology holds many exciting possibilities for environmental science. These easy-to-use, intuitive systems provide means for simultaneously visualizing four-dimensional environmental data from many different sources, enabling the generation of new hypotheses and driving greater understanding of the Earth system. Through the use of simple markup languages, scientists can publish and consume data in interoperable formats without the need for technical assistance. In this paper we give, with examples from our own work, a number of scientific uses for virtual globes, demonstrating their particular advantages. We explain how we have used Web Services to connect virtual globes with diverse data sources and enable more sophisticated usage such as data analysis and collaborative visualization. We also discuss the current limitations of the technology, with particular regard to the visualization of subsurface data and vertical sections.
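As a minimal illustration of the "simple markup" route to publishing data for virtual globes, the hypothetical sketch below writes a single KML placemark; the observation name and coordinates are invented for the example and do not come from the work described.

```python
# Illustrative only: writing a minimal KML placemark for a single observation.
# The point name and coordinates are made up for the sketch.

def placemark_kml(name, lon, lat, alt=0.0):
    """Return a KML document containing one placemark (longitude, latitude, altitude)."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <Point>
        <coordinates>{lon},{lat},{alt}</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>"""

if __name__ == "__main__":
    with open("observation.kml", "w") as f:
        f.write(placemark_kml("Sea surface temperature sample", -30.0, 45.0))
```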
Abstract:
While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools for detecting the presence of heterogeneity have not been developed specifically for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for untruncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted for the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
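For background (this is standard notation, not the authors' test statistic), zero-truncated count data are often modelled by conditioning a Poisson variable on being positive; over-dispersion is then a departure from the Poisson's equality of mean and variance:

```latex
\[
P(Y = y \mid Y > 0) = \frac{e^{-\lambda}\,\lambda^{y}}{y!\,\bigl(1 - e^{-\lambda}\bigr)},
\qquad y = 1, 2, \dots
\]
```

Under the untruncated Poisson model, $E(Y) = \operatorname{Var}(Y) = \lambda$, so a variance exceeding the mean signals over-dispersion.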
Abstract:
In survival analysis frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted and the deviance information criterion is used to compare models. As an illustration of the approach a bivariate data set of corneal graft survival times is analysed. (C) 2006 Elsevier B.V. All rights reserved.
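As background to the modelling framework, in standard notation (not the specific correlated model fitted in the paper) a frailty term acts multiplicatively on the hazard:

```latex
\[
h_{ij}(t \mid z_i) = z_i\, h_0(t)\, \exp\!\bigl(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}\bigr),
\]
```

where $z_i$ is the frailty of cluster $i$ (here taken to be inverse Gaussian distributed), $h_0(t)$ the baseline hazard, and $\mathbf{x}_{ij}$ the covariates of subject $j$ in cluster $i$.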
Abstract:
A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments and emergency services in various buildings. The primary goal of a WSN is to collect the data sensed by the sensors. These data are characteristically noisy and exhibit temporal and spatial correlation. As this paper demonstrates, extracting useful information from such data requires a range of analysis techniques. Data mining is a process in which a wide spectrum of data analysis methods is used; it is applied here to analyse data collected from WSNs monitoring an indoor environment in a building. A case study demonstrates how data mining can be used to optimise the use of office space in a building.
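A minimal sketch of the kind of analysis meant here, assuming hypothetical hourly occupancy readings rather than the case study's actual data:

```python
# Illustrative sketch (not the case study's pipeline): smooth noisy hourly
# sensor readings, then cluster offices with similar daily usage profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: 20 offices x 24 hourly mean readings from occupancy sensors.
readings = np.clip(rng.normal(loc=0.4, scale=0.25, size=(20, 24)), 0.0, 1.0)

# Light temporal smoothing to damp sensor noise (3-hour moving average).
kernel = np.ones(3) / 3.0
smoothed = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"),
                               axis=1, arr=readings)

# Group offices by daily usage profile.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(smoothed)
for label in range(3):
    members = np.where(model.labels_ == label)[0]
    print(f"usage profile {label}: offices {members.tolist()}")
```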
Abstract:
Event-related functional magnetic resonance imaging (efMRI) has emerged as a powerful technique for detecting the brain's responses to presented stimuli. A primary goal in efMRI data analysis is to estimate the Hemodynamic Response Function (HRF) and to locate the regions of the brain that are activated when specific tasks are performed. This paper develops new methodologies that substantially improve both parametric and nonparametric estimation and hypothesis testing of the HRF. First, an effective and computationally fast scheme for estimating the error covariance matrix for efMRI is proposed. Second, methodologies for estimation and hypothesis testing of the HRF are developed. Simulations support the effectiveness of the proposed methods. When applied to an efMRI dataset from an emotional control study, the method reveals more meaningful findings than the popular methods offered by AFNI and FSL. (C) 2008 Elsevier B.V. All rights reserved.
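In the standard formulation (given as background, not as the paper's specific estimators), the observed BOLD time series is modelled as the stimulus sequence convolved with the HRF plus temporally correlated noise:

```latex
\[
y(t) = \sum_{\tau = 0}^{T} h(\tau)\, s(t - \tau) + \varepsilon(t),
\]
```

where $s(t)$ is the stimulus indicator, $h(\tau)$ the HRF to be estimated over lags $0,\dots,T$, and $\varepsilon(t)$ noise whose covariance structure must itself be estimated.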
Abstract:
The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
Abstract:
The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain, and of the packing characteristics of the chain in the bulk material, represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret owing to the very large amount of information that they carry, that is, the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, illustrating the need to attack the data analysis problem simultaneously from several fronts.
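As a toy illustration of one of the modelling routes named above, the sketch below shows the reverse Monte Carlo acceptance rule; the compute_pattern function is a stand-in for a real structure-factor calculation, and everything else is an assumption made for the example.

```python
# Toy sketch of the reverse Monte Carlo (RMC) acceptance rule: random particle
# moves are kept when they improve (or, with Metropolis probability, worsen)
# the chi-squared agreement between a computed and an experimental pattern.
import math
import random

def chi_squared(model_pattern, data, sigma):
    return sum((m - d) ** 2 / sigma ** 2 for m, d in zip(model_pattern, data))

def rmc_step(positions, data, sigma, compute_pattern, max_move=0.1):
    """Attempt one RMC move; return the (possibly updated) configuration."""
    old_chi2 = chi_squared(compute_pattern(positions), data, sigma)

    # Propose a small random displacement of one randomly chosen particle.
    i = random.randrange(len(positions))
    trial = list(positions)
    trial[i] = tuple(c + random.uniform(-max_move, max_move) for c in positions[i])

    new_chi2 = chi_squared(compute_pattern(trial), data, sigma)
    delta = new_chi2 - old_chi2
    if delta <= 0 or random.random() < math.exp(-delta / 2.0):
        return trial          # accept the move
    return positions          # reject and keep the old configuration
```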
Abstract:
Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although some programmes are available for interpreting bacterial transcriptomics data, and CGH microarray data for assessing genetic stability in oncogenes, none is designed specifically to understand the mosaic nature of bacterial genomes. Consequently a bottleneck persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from a test strain against three reference strains simultaneously. Each stage of the process is described. We have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, and have shown that a simple dynamic approach using a kernel density estimator performed better than both the established methods and a more sophisticated mixture-modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
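A minimal sketch of a kernel-density cut-off of the kind mentioned above, under illustrative assumptions about the data layout (it is not the authors' exact procedure):

```python
# Sketch: choose a presence/absence cut-off from the density of log-ratios
# with a kernel density estimator, taking the deepest valley between modes.
import numpy as np
from scipy.stats import gaussian_kde

def kde_cutoff(log_ratios, grid_points=512):
    """Return a cut-off separating the main modes of the log-ratio density."""
    kde = gaussian_kde(log_ratios)
    grid = np.linspace(log_ratios.min(), log_ratios.max(), grid_points)
    density = kde(grid)

    # Local minima of the estimated density are candidate cut-offs.
    interior = np.arange(1, grid_points - 1)
    minima = interior[(density[interior] < density[interior - 1]) &
                      (density[interior] < density[interior + 1])]
    if minima.size == 0:
        return float(np.median(log_ratios))   # fall back if the density is unimodal
    return float(grid[minima[np.argmin(density[minima])]])

# Genes whose log-ratio falls below the cut-off would be flagged as absent/divergent.
```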
Abstract:
In recent years, the area of data mining has been experiencing considerable demand for technologies that extract knowledge from large and complex data sources. There has been substantial commercial interest, as well as active research, aimed at developing new and improved approaches for extracting information, relationships, and patterns from large datasets. Artificial neural networks (NNs) are popular biologically inspired intelligent methodologies, whose classification, prediction, and pattern recognition capabilities have been utilized successfully in many areas, including science, engineering, medicine, business, banking, telecommunication, and many other fields. This paper highlights, from a data mining perspective, the implementation of NNs, using supervised and unsupervised learning, for pattern recognition, classification, prediction, and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks. © 2012 Wiley Periodicals, Inc.
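As a generic illustration of the supervised-learning usage discussed above (a sketch on a stock dataset, not the paper's own experiments):

```python
# Generic sketch of a supervised neural-network classifier on tabular data;
# the dataset and network size are arbitrary choices for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale the inputs, then train a small multilayer perceptron.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```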
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
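A minimal sketch of the data-parallel pattern described above, with a simple frequency count standing in for a heavier pattern-mining step:

```python
# Partition the records, mine each chunk in a separate worker process,
# then merge the partial results (map/reduce style).
from collections import Counter
from multiprocessing import Pool

def mine_chunk(records):
    """Count item occurrences in one partition of the data."""
    counts = Counter()
    for record in records:
        counts.update(record)
    return counts

def parallel_mine(records, n_workers=4):
    chunk_size = max(1, len(records) // n_workers)
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    with Pool(processes=n_workers) as pool:
        partials = pool.map(mine_chunk, chunks)
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    transactions = [["milk", "bread"], ["bread", "butter"], ["milk", "butter", "bread"]]
    print(parallel_mine(transactions, n_workers=2).most_common(3))
```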
Abstract:
The purpose of this lecture is to review recent developments in data analysis, initialization and data assimilation. The development of 3-dimensional multivariate schemes has been very timely because of their suitability for handling the many different types of observations collected during FGGE. Great progress has taken place in the initialization of global models with the aid of the non-linear normal mode technique. However, in spite of this progress, several fundamental problems remain unsatisfactorily solved. Of particular importance are the initialization of the divergent wind fields in the Tropics and the search for proper ways to initialize weather systems driven by non-adiabatic processes. The unsatisfactory ways in which such processes are currently initialized lead to excessively long spin-up times.
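The multivariate analysis schemes referred to above combine a background (first-guess) field with observations; in standard statistical-interpolation notation (background material, not the lecture's own derivation):

```latex
\[
\mathbf{x}_a = \mathbf{x}_b + \mathbf{K}\bigl(\mathbf{y} - H\,\mathbf{x}_b\bigr),
\qquad
\mathbf{K} = \mathbf{B}H^{\top}\bigl(H\mathbf{B}H^{\top} + \mathbf{R}\bigr)^{-1},
\]
```

where $\mathbf{x}_b$ is the background state, $\mathbf{y}$ the observation vector, $H$ the observation operator, and $\mathbf{B}$ and $\mathbf{R}$ the background- and observation-error covariance matrices.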