856 results for Secondary data analysis
Abstract:
Workplace absenteeism has a significant economic impact on companies and on society in general. It is a difficult problem to manage because it is multifactorial: although most absences are due to general illness, closer analysis can reveal other factors that lead to a worker's absence and thereby disrupt the normal operation of the company, which makes it essential to study this topic. Objective: To characterize the main causes of absenteeism among the general practitioners of an IPS (healthcare provider institution) that delivered outpatient general medicine services nationwide during 2014. Materials and methods: Cross-sectional study of secondary data from the sick-leave (incapacity) records of the IPS for 2014. The inclusion criterion was the general practitioners employed during 2014 by the IPS, which provides health services nationwide; maternity and paternity leaves were excluded. The final sample comprised 202 physicians, and 313 incapacities were recorded during 2014. Frequency distributions, percentages and the prevalence of incapacities were analysed. Results: During 2014, 313 incapacities were recorded in a population of 202 general practitioners, with a higher prevalence among women. The most frequent diagnostic category was "other", which includes migraine, vertigo and breast disorders, with 59 incapacities, followed by gastrointestinal diseases with 25. Conclusions and recommendations: Incapacities were more frequent in women than in men. The most frequent diagnosis was "generic illness or absence of diagnosis". The most frequent incapacity duration was one day, with 46 records. The highest number of incapacities for a single physician in 2014 was 18. The company is advised to follow up on repeated incapacities, since these could be related to an occupational disease that has not yet been formally classified. It is also recommended that the database be supplemented with information such as history of chronic disease and sedentary lifestyle, which would enable further studies of the cardiovascular risk of this population.
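As an illustration of the kind of descriptive analysis this abstract reports (frequency distribution, percentages and period prevalence of incapacities), the following is a minimal sketch in Python/pandas. The register layout and column names (`physician_id`, `sex`, `diagnosis_group`, `days`) are invented for the example, not taken from the study.

```python
import pandas as pd

# Hypothetical incapacity register: one row per sick-leave episode (2014).
records = pd.DataFrame({
    "physician_id": [101, 101, 102, 103, 103, 103],
    "sex": ["F", "F", "M", "F", "F", "F"],
    "diagnosis_group": ["other", "gastrointestinal", "other",
                        "respiratory", "other", "gastrointestinal"],
    "days": [1, 3, 1, 2, 1, 5],
})

n_physicians = 202  # size of the study population reported in the abstract

# Frequency distribution and percentages by diagnostic category.
freq = records["diagnosis_group"].value_counts()
pct = (freq / len(records) * 100).round(1)

# Period prevalence: physicians with at least one incapacity / all physicians.
prevalence = records["physician_id"].nunique() / n_physicians

print(pd.DataFrame({"n": freq, "%": pct}))
print(f"Period prevalence of incapacity: {prevalence:.1%}")
```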
Abstract:
Introduction: Medically certified absenteeism is a problem because of its impact on both the worker and the company. Objective: To characterize medically certified absenteeism at a food company in Bogotá. Materials and methods: Cross-sectional study using secondary data from sick-leave records for 2013 and 2014. The data were processed with SPSS, and measures of central tendency and dispersion were obtained. The number and duration of incapacities, their mean duration and the body system affected were determined, and frequencies were analysed by cost center and gender. Results: A total of 575 incapacities were recorded, 387 due to common (non-occupational) illness and 188 due to workplace accidents. A total of 3,326 days were lost to absenteeism, 45.09% in 2013 and the remaining 54.91% in 2014; of these days, 1,985 were generated by events of common origin and 1,341 by workplace accidents. The main cause of incapacity due to common illness was musculoskeletal disorders, and for those originating in workplace accidents it was hand injuries. Conclusions: Workplace accidents decreased in 2014 compared with 2013, and the system most affected by common illness was the musculoskeletal system. It is advisable to implement a workplace surveillance and analysis system or programme to identify the associated risk factors and minimize risks.
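The abstract reports the analysis was done in SPSS; the same descriptive summaries (central tendency, dispersion, frequencies by cost center and gender) can be sketched in Python/pandas as below. The column names (`year`, `origin`, `cost_center`, `sex`, `days`) and values are invented for illustration.

```python
import pandas as pd

# Hypothetical sick-leave register: one row per incapacity (2013-2014).
df = pd.DataFrame({
    "year": [2013, 2013, 2014, 2014, 2014],
    "origin": ["common illness", "work accident", "common illness",
               "common illness", "work accident"],
    "cost_center": ["production", "packing", "production", "logistics", "packing"],
    "sex": ["M", "F", "F", "M", "M"],
    "days": [3, 10, 2, 7, 15],
})

# Measures of central tendency and dispersion for incapacity duration.
print(df["days"].agg(["count", "mean", "median", "std", "min", "max"]))

# Number of incapacities, days lost and mean duration by origin, cost center and sex.
summary = (df.groupby(["origin", "cost_center", "sex"])["days"]
             .agg(incapacities="count", days_lost="sum", mean_duration="mean"))
print(summary)
```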
Abstract:
Eye tracking has become a preponderant technique for evaluating user interaction and behaviour with study objects in defined contexts. Common eye-tracking data representation techniques offer valuable input regarding user interaction and eye-gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, several methodological approaches are presented, which resulted from studies developed, and under development, at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques were insufficient; new forms of representing data, based on common techniques, were therefore developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, as well as the data representation techniques used and some of the results obtained.
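For readers unfamiliar with how fixations and saccades are derived from raw gaze samples, the following is a minimal sketch of a dispersion-threshold (I-DT-style) fixation detector. The thresholds and the `(timestamp, x, y)` sample format are assumptions for illustration, not the processing used in the studies above.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group consecutive gaze samples (t, x, y) into fixations.

    A window of samples is a fixation candidate while its spatial dispersion
    (max(x)-min(x) + max(y)-min(y)) stays below `max_dispersion` (e.g. pixels)
    and it lasts at least `min_duration` seconds.  Gaps between fixations are
    treated as saccadic movement.
    """
    fixations, start = [], 0
    while start < len(samples):
        end = start + 1
        while end <= len(samples):
            window = samples[start:end]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        window = samples[start:end - 1]  # last window still below the threshold
        if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration:
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            start = end - 1
        else:
            start += 1
    return fixations  # list of (t_start, t_end, centroid_x, centroid_y)
```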
Abstract:
Virtual globe technology holds many exciting possibilities for environmental science. These easy-to-use, intuitive systems provide means for simultaneously visualizing four-dimensional environmental data from many different sources, enabling the generation of new hypotheses and driving greater understanding of the Earth system. Through the use of simple markup languages, scientists can publish and consume data in interoperable formats without the need for technical assistance. In this paper we give, with examples from our own work, a number of scientific uses for virtual globes, demonstrating their particular advantages. We explain how we have used Web Services to connect virtual globes with diverse data sources and enable more sophisticated usage such as data analysis and collaborative visualization. We also discuss the current limitations of the technology, with particular regard to the visualization of subsurface data and vertical sections.
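As a concrete example of the "simple markup languages" the authors refer to, a KML placemark that a virtual globe client can open directly can be generated in a few lines. The site name and coordinates below are invented for illustration.

```python
# Minimal KML document with a single placemark (e.g. an observation site).
placemark_name = "Example observation site"   # invented example values
lon, lat, alt = -1.25, 51.75, 0.0             # KML uses lon,lat,alt order

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{placemark_name}</name>
    <Point>
      <coordinates>{lon},{lat},{alt}</coordinates>
    </Point>
  </Placemark>
</kml>"""

with open("site.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```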
Abstract:
This paper considers the potential contribution of secondary quantitative analyses of large scale surveys to the investigation of 'other' childhoods. Exploring other childhoods involves investigating the experience of young people who are unequally positioned in relation to multiple, embodied, identity locations, such as (dis)ability, 'class', gender, sexuality, ethnicity and race. Despite some possible advantages of utilising extensive databases, the paper outlines a number of methodological problems with existing surveys which tend to reinforce adultist and broader hierarchical social relations. It is contended that scholars of children's geographies could overcome some of these problematic aspects of secondary data sources by endeavouring to transform the research relations of large scale surveys. Such endeavours would present new theoretical, ethical and methodological complexities, which are briefly considered.
Abstract:
While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools to detect the presence of heterogeneity have not been specifically developed for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for un-truncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted for the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
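The paper's test statistic is not reproduced here. As a rough illustration of what over-dispersion looks like in zero-truncated counts, the sketch below simulates a heterogeneous capture–recapture sample and compares the variance-to-mean ratio of the observed (zero-truncated) counts with that of a homogeneous Poisson sample. This is a naive empirical check for intuition only, not the test proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def zero_truncated(counts):
    """Keep only individuals that were captured at least once."""
    return counts[counts > 0]

n = 5000
# Homogeneous population: Poisson capture counts with a common mean.
homogeneous = zero_truncated(rng.poisson(1.0, size=n))
# Heterogeneous population: individual capture rates vary (gamma mixture),
# which induces over-dispersion (negative-binomial-like counts).
rates = rng.gamma(shape=0.5, scale=2.0, size=n)
heterogeneous = zero_truncated(rng.poisson(rates))

for name, x in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    print(f"{name:14s} mean={x.mean():.2f}  var/mean={x.var(ddof=1)/x.mean():.2f}")
```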
Abstract:
In survival analysis frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted and the deviance information criterion is used to compare models. As an illustration of the approach a bivariate data set of corneal graft survival times is analysed. (C) 2006 Elsevier B.V. All rights reserved.
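For context, a correlated frailty model of the kind analysed here is typically written as a proportional hazards model with a multiplicative random effect; a generic formulation (the notation is ours, not necessarily the paper's) is

```latex
h_{ij}(t \mid z_{ij}) \;=\; z_{ij}\, h_0(t)\, \exp\!\bigl(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}\bigr), \qquad j = 1, 2,
```

where h_0 is the baseline hazard, x_ij are covariates for member j of pair i, and the frailties (z_i1, z_i2) are positive, correlated random effects (inverse Gaussian distributed in this paper). Integrating the frailties out yields the continuous mixture distribution for the survival times referred to above.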
Abstract:
A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments, and emergency services, in various buildings. The primary goal of a WSN is to collect the data sensed by its sensors. These data are typically heavily noisy and exhibit temporal and spatial correlation. As this paper demonstrates, extracting useful information from such data requires a variety of analysis techniques. Data mining is a process in which a wide spectrum of data analysis methods is used. It is applied in the paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study is given to demonstrate how data mining can be used to optimise the use of the office space in a building.
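As a toy example of the kind of mining that can be applied to such sensor data, the sketch below clusters hourly occupancy profiles of several offices with k-means to separate lightly used from heavily used rooms. The simulated data and the choice of k-means are illustrative assumptions, not the method used in the paper's case study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: mean occupancy (0-1) per office for each working hour (9-17).
# Rows = offices, columns = hours of the day.
lightly_used = rng.uniform(0.0, 0.3, size=(10, 9))
heavily_used = rng.uniform(0.5, 1.0, size=(10, 9))
occupancy = np.vstack([lightly_used, heavily_used])

# Cluster offices by their daily occupancy profile.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(occupancy)

for cluster in range(2):
    members = np.where(labels == cluster)[0]
    print(f"cluster {cluster}: offices {members.tolist()}, "
          f"mean occupancy {occupancy[members].mean():.2f}")
```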
Abstract:
Event-related functional magnetic resonance imaging (efMRI) has emerged as a powerful technique for detecting the brain's responses to presented stimuli. A primary goal in efMRI data analysis is to estimate the Hemodynamic Response Function (HRF) and to locate activated regions in human brains when specific tasks are performed. This paper develops new methodologies that are important improvements not only to parametric but also to nonparametric estimation and hypothesis testing of the HRF. First, an effective and computationally fast scheme for estimating the error covariance matrix for efMRI is proposed. Second, methodologies for estimation and hypothesis testing of the HRF are developed. Simulations support the effectiveness of our proposed methods. When applied to an efMRI dataset from an emotional control study, our method reveals more meaningful findings than the popular methods offered by AFNI and FSL. (C) 2008 Elsevier B.V. All rights reserved.
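As background, HRF estimation in event-related fMRI is commonly framed as a linear model in which the measured signal is the convolution of each stimulus sequence with its HRF plus temporally correlated noise; a generic formulation (not the specific estimators developed in the paper) is

```latex
y(t) \;=\; \sum_{j=1}^{J} (x_j * h_j)(t) + \varepsilon(t)
      \;=\; \sum_{j=1}^{J} \int_{0}^{T} h_j(s)\, x_j(t-s)\, \mathrm{d}s + \varepsilon(t),
```

where x_j is the onset (indicator) function of stimulus type j, h_j is the HRF to be estimated for that type, and ε is noise whose covariance must also be estimated. After discretisation this becomes the linear model y = Xβ + ε with a design matrix X built from shifted stimulus onsets, which is where the error-covariance estimation step mentioned above enters.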
Abstract:
The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
Abstract:
The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain and of the packing characteristics of the chain in the bulk material represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret due to the very large amount of information that they carry, that is the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, which illustrate the need to attack the data analysis problem simultaneously from several fronts.
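For orientation, the quantity that links such wide-Q neutron data to real-space structure is the standard Fourier relation between the structure factor and the pair distribution function (a textbook relation, not specific to the approaches described here):

```latex
g(r) - 1 \;=\; \frac{1}{2\pi^{2}\rho\, r}\int_{0}^{\infty} Q\,\bigl[S(Q) - 1\bigr]\,\sin(Qr)\,\mathrm{d}Q,
```

where ρ is the atomic number density. Models built by force-field, single-chain or reverse Monte Carlo routes are then judged by how well their computed S(Q) reproduces the measured diffraction pattern over the full Q range.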
Abstract:
Background: Microarray based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and CGH microarray data for looking at genetic stability in oncogenes, there are none specifically to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process that may be automated in the future to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, and shown that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
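The kernel-density step can be illustrated with a short sketch: estimate the density of the log2 signal ratios and place the presence/absence cut-off at the density minimum between the "absent/divergent" and "present" modes. The numbers are simulated and the details follow the spirit of the approach described, not its exact implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Simulated log2(test/reference) ratios: most genes present (around 0),
# a minority absent or highly divergent (strongly negative).
present = rng.normal(0.0, 0.3, size=900)
absent = rng.normal(-2.5, 0.5, size=100)
ratios = np.concatenate([present, absent])

# Kernel density estimate over the observed range.
kde = gaussian_kde(ratios)
grid = np.linspace(ratios.min(), ratios.max(), 1000)
density = kde(grid)

# Cut-off: the density minimum between the two (approximate) mode locations.
lo, hi = -2.5, 0.0
mask = (grid > lo) & (grid < hi)
cutoff = grid[mask][np.argmin(density[mask])]

called_absent = ratios < cutoff
print(f"cut-off at log2 ratio {cutoff:.2f}; "
      f"{called_absent.sum()} of {len(ratios)} genes called absent/divergent")
```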
Abstract:
In recent years, the area of data mining has experienced considerable demand for technologies that extract knowledge from large and complex data sources. There has been substantial commercial interest as well as active research in the area that aim to develop new and improved approaches for extracting information, relationships, and patterns from large datasets. Artificial neural networks (NNs) are popular biologically-inspired intelligent methodologies, whose classification, prediction, and pattern recognition capabilities have been utilized successfully in many areas, including science, engineering, medicine, business, banking, telecommunication, and many other fields. This paper highlights from a data mining perspective the implementation of NNs, using supervised and unsupervised learning, for pattern recognition, classification, prediction, and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks. © 2012 Wiley Periodicals, Inc.
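As a minimal, generic illustration of supervised NN classification of the kind discussed (not tied to any particular bioinformatics or financial dataset), the following trains a small multilayer perceptron on a toy two-class problem:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian clouds in a 2-D feature space.
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(3, 1, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Small feed-forward network trained with backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```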
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
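As a small illustration of data-parallel pattern extraction of the kind the chapter surveys, the sketch below splits a transaction list across worker processes, counts item frequencies in each partition, and merges the partial counts (a map-reduce-style pattern). The data are invented; in a Grid or Cloud deployment each partition would typically live on a separate node rather than a local process.

```python
from collections import Counter
from multiprocessing import Pool

def count_items(transactions):
    """Map step: count item occurrences in one data partition."""
    counts = Counter()
    for transaction in transactions:
        counts.update(transaction)
    return counts

def main():
    # Invented transaction data.
    transactions = [["bread", "milk"], ["bread", "beer"],
                    ["milk", "beer", "bread"], ["milk"]] * 1000
    n_workers = 4
    chunk = len(transactions) // n_workers
    partitions = [transactions[i * chunk:(i + 1) * chunk] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        partial_counts = pool.map(count_items, partitions)

    # Reduce step: merge partial counts into global item frequencies.
    total = sum(partial_counts, Counter())
    print(total.most_common(3))

if __name__ == "__main__":
    main()
```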