686 results for Impala, Hadoop, Big Data, HDFS, Social Business Intelligence, SBI, cloudera


Relevance:

100.00%

Publisher:

Abstract:

Collaborative communities, in which large numbers of people work together to produce shared resources (e.g. Github, Wikipedia, OpenStreetMap, Arduino, StackOverflow), are progressively spreading into a multitude of fields. However, it is difficult to understand how they work and evolve. Which types of users are most active on Wikia? How has the number of active wikis evolved in recent years? What activity profile do most Wikia contributors show? Are men or women more active on Wikipedia? In Github projects, is programming effort (and commit frequency) distributed evenly over time, or is it usually concentrated? These communities, typically online, record their activity in large databases, many of them publicly available. However, ordinary citizens have neither the tools nor the knowledge needed to draw conclusions from those data. In this final degree project (TFG) we develop a tool for exploratory analysis and visualization of data from the Wikia platform, a collaborative website that allows the creation, editing and modification of the content and structure of thousands of encyclopedia-style web pages based on wiki technology. Our goal is for this web application to be usable by anyone, without requiring the user to be a Big Data expert in order to visualize charts of the evolution or the distributions of the community's internal behaviour, while being able to modify some of their parameters and see how they change. As a result of this work, a first version of the application has been developed, available on GitHub1 and at http://chartsup.esy.es/
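
The abstract does not publish the data schema behind the application, so the following is only a minimal sketch of the kind of evolution chart it describes (active wikis over time), assuming a hypothetical edit log wikia_edits.csv with wiki, user and timestamp columns; none of these names come from the chartsup code.

```python
# Minimal sketch (hypothetical schema): plot the monthly number of active wikis
# from an edit log with columns wiki, user, timestamp. The column names and the
# file wikia_edits.csv are illustrative, not the schema used by the application.
import pandas as pd
import matplotlib.pyplot as plt

edits = pd.read_csv("wikia_edits.csv", parse_dates=["timestamp"])

# A wiki counts as "active" in a month if it received at least one edit that month.
monthly_active = edits.set_index("timestamp")["wiki"].resample("M").nunique()

monthly_active.plot(title="Active wikis per month")
plt.ylabel("Number of active wikis")
plt.tight_layout()
plt.show()
```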

Relevance:

100.00%

Publisher:

Abstract:

ACKNOWLEDGEMENTS This research is based upon work supported in part by the U.S. ARL and U.K. Ministry of Defense under Agreement Number W911NF-06-3-0001, and by the NSF under award CNS-1213140. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views or represent the official policies of the NSF, the U.S. ARL, the U.S. Government, the U.K. Ministry of Defense or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Relevance:

100.00%

Publisher:

Abstract:

ACKNOWLEDGEMENTS This research is based upon work supported in part by the U.S. ARL and U.K. Ministry of Defense under Agreement Number W911NF-06-3-0001, and by the NSF under award CNS-1213140. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views or represent the official policies of the NSF, the U.S. ARL, the U.S. Government, the U.K. Ministry of Defense or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Relevance:

100.00%

Publisher:

Abstract:

Physical activity improves health, but only 4.8% of Canadians reach the recommended level. Socioeconomic position is one of the most important determinants of physical activity. It is cross-sectionally associated with physical activity in adolescence and in adulthood. This thesis sought to determine whether there is a long-term association between socioeconomic position early in the life course and physical activity in adulthood. If such an association existed, a second objective was to determine which theoretical model from life-course epidemiology best described its shape. The thesis comprises three articles: a systematic review and two original studies. In the systematic review, Medline and EMBASE were searched for studies that measured socioeconomic position before age 18 and physical activity at age 18 or older. In the two original studies, structural equation modelling was used to compare three alternative life-course epidemiology models: the accumulation-of-risk model with additive effects, the accumulation-of-risk model with a trigger effect, and the critical-period model. These models were compared in two nationally representative prospective cohorts: the 1970 British birth cohort (n=16,571; first study) and the National Longitudinal Survey of Children and Youth (n=16,903; second study). In the systematic review, 10,619 articles were screened by two independent researchers and 42 were retained. For the outcome "physical activity" (all types and measures combined), a significant association with childhood socioeconomic position was found in 26 of 42 studies (61.9%). When only leisure-time physical activity was considered, a significant association was found in 21 of 31 studies (67.7%). In a subsample of 21 methodologically stronger studies, the proportions of studies finding an association were higher: 15/21 (71.4%) for all types and measures of physical activity and 12/15 (80%) for leisure-time physical activity only. In our first original study, using data from the British birth cohort, we found for social class that the accumulation-of-risk model with additive effects fit best in both men and women for leisure-time, occupational and transport physical activity. In our second original study, using the Canadian data on leisure-time physical activity, we found that in men the critical-period model fit the data best for education and income, whereas in women the accumulation-of-risk model with additive effects fit best for income, while education did not fit any of the models tested. In conclusion, our systematic review indicates that socioeconomic position early in the life course is associated with physical activity in adulthood. The results of our two original studies suggest a pattern of associations best represented by the accumulation-of-risk model with additive effects.

Relevance:

100.00%

Publisher:

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a behavioural disorder that affects between 5% and 7% of children worldwide. To combat the symptoms of ADHD, many children are given medication and behavioural therapy. This work is part of the TherapyForAll®© project, which focuses on the development of serious games that can complement the behavioural treatment of ADHD. A very significant amount of data must be collected while the games are being played in order to draw conclusions about the evolution of the symptoms. The scope of this dissertation covers the processing of the data obtained from the sessions and its presentation to the responsible healthcare professional, so that they can quickly understand the child's condition and progress and thereby provide the necessary support.

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
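
For readers unfamiliar with the latent structure side of this comparison, the following is a minimal sketch of how a PARAFAC (latent class) factorization represents the joint probability mass function of multivariate categorical data as a mixture over latent classes; the collapsed Tucker decomposition proposed in Chapter 2 generalizes this idea and is not implemented here. The dimensions and Dirichlet draws are illustrative assumptions.

```python
# Minimal sketch: a PARAFAC (latent class) representation of the joint pmf of
# p categorical variables: P(x_1, ..., x_p) = sum_h pi[h] * prod_j psi[j][h, x_j].
# The collapsed Tucker decomposition proposed in the thesis generalizes this
# and is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
p, k, d = 3, 2, 4            # variables, latent classes, categories per variable

pi = rng.dirichlet(np.ones(k))                                # class weights
psi = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]   # per-class marginals

def joint_pmf(x):
    """Probability of the cell x = (x_1, ..., x_p) under the latent class model."""
    probs = pi.copy()
    for j, xj in enumerate(x):
        probs = probs * psi[j][:, xj]
    return probs.sum()

# The cell probabilities of the full contingency table sum to one.
cells = np.stack(np.meshgrid(*[np.arange(d)] * p, indexing="ij"), axis=-1).reshape(-1, p)
total = sum(joint_pmf(tuple(c)) for c in cells)
print(f"sum over all {d**p} cells = {total:.6f}")   # approximately 1.0
```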

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
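
As a rough illustration of what a Gaussian approximation to a log-linear posterior looks like, the sketch below computes a standard Laplace (mode-plus-curvature) approximation for a Poisson log-linear model with a Gaussian prior. It is not the optimal Gaussian approximation under Diaconis-Ylvisaker priors derived in Chapter 4; the model, prior variance and simulated data are illustrative assumptions.

```python
# Minimal sketch: Laplace (Gaussian) approximation to the posterior of a Poisson
# log-linear model with a Gaussian prior. This is NOT the optimal Gaussian
# approximation under Diaconis-Ylvisaker priors derived in the thesis; it only
# illustrates approximating a log-linear posterior by a Gaussian.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, q = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])   # design matrix
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0                                                      # prior variance

def neg_log_post(beta):
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * beta @ beta / tau2

def grad(beta):
    mu = np.exp(X @ beta)
    return -X.T @ (y - mu) + beta / tau2

res = minimize(neg_log_post, np.zeros(q), jac=grad, method="BFGS")
beta_hat = res.x
W = np.diag(np.exp(X @ beta_hat))
hessian = X.T @ W @ X + np.eye(q) / tau2     # negative Hessian of log posterior
cov = np.linalg.inv(hessian)                 # Gaussian approximation: N(beta_hat, cov)
print("posterior mode:", np.round(beta_hat, 3))
print("approx posterior sd:", np.round(np.sqrt(np.diag(cov)), 3))
```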

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
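
One concrete example of the kind of approximating kernel covered by such a framework is a Metropolis step whose acceptance ratio is computed from a rescaled random subset of the data rather than the full likelihood. The sketch below illustrates that idea for a toy normal-mean model; it is not the framework of Chapter 6, and the subset size, proposal scale and flat prior are arbitrary choices made for the example.

```python
# Minimal sketch: a random-walk Metropolis kernel whose log-likelihood is
# approximated from a random subset of the data (rescaled by n/m). This is just
# one example of an approximating transition kernel, not the thesis framework.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=1.0, size=10_000)    # model: N(theta, 1)
n, m = data.size, 500                                  # full size, subset size

def approx_loglik(theta):
    subset = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (subset - theta) ** 2)

theta, chain = 0.0, []
current = approx_loglik(theta)
for _ in range(5_000):
    prop = theta + 0.05 * rng.normal()
    prop_ll = approx_loglik(prop)
    if np.log(rng.uniform()) < prop_ll - current:      # flat prior for simplicity
        theta, current = prop, prop_ll
    chain.append(theta)

print("approximate posterior mean:", np.mean(chain[1_000:]))
```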

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
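
The truncated Normal sampler referred to above is the classical Albert-Chib data augmentation scheme for probit regression. The sketch below implements it for a toy rare-event dataset, where a strongly negative intercept makes successes scarce; the simulated data and flat prior on the coefficients are illustrative assumptions, not the quantitative advertising data analyzed in Chapter 7.

```python
# Minimal sketch: Albert-Chib truncated-normal data augmentation Gibbs sampler
# for probit regression with a flat prior on beta. With large n and very few
# successes (a rare-event setting) the chain's autocorrelation becomes severe,
# which is the behavior studied in the thesis.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, q = 2_000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.3])                  # intercept makes successes rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(q)
draws = []
for _ in range(2_000):
    # 1. Sample latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1
    #    and to (-inf, 0) if y_i = 0.
    mean = X @ beta
    lower = np.where(y == 1, -mean, -np.inf)
    upper = np.where(y == 1, np.inf, -mean)
    z = mean + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. Sample beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
    mu = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(mu, XtX_inv)
    draws.append(beta.copy())

print("posterior mean of beta:", np.round(np.mean(draws[500:], axis=0), 3))
```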

Relevance:

100.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or Bayesian variable selection methods, calculates the "median" feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
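
A rough sketch of this row-partitioned workflow, using the lasso for the per-subset selection step, might look as follows; the subset count, penalty level and median threshold are illustrative choices, not the settings or implementation used in the thesis.

```python
# Rough sketch of the row-partitioned "message"-style workflow described above:
# per-subset lasso feature selection, a median inclusion rule across subsets,
# per-subset least-squares refits on the selected features, then averaging.
# This is an illustration, not the thesis implementation.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
n, p, k = 3_000, 50, 6                       # rows, features, subsets
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 1, -1]
y = X @ beta + rng.normal(size=n)

row_blocks = np.array_split(rng.permutation(n), k)

# Step 1: feature selection in parallel (here sequentially) on each subset.
inclusion = np.zeros((k, p))
for i, idx in enumerate(row_blocks):
    sel = Lasso(alpha=0.1).fit(X[idx], y[idx])
    inclusion[i] = sel.coef_ != 0

# Step 2: keep features whose median inclusion indicator across subsets is 1.
keep = np.median(inclusion, axis=0) >= 0.5

# Step 3: refit on the selected features in each subset and average the estimates.
coefs = np.array([
    LinearRegression().fit(X[idx][:, keep], y[idx]).coef_
    for idx in row_blocks
])
beta_hat = coefs.mean(axis=0)
print("selected features:", np.flatnonzero(keep))
print("averaged estimates:", np.round(beta_hat, 2))
```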

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
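
The decorrelation idea can be sketched roughly as follows: pre-multiply X and y by (X X^T / p + r I)^(-1/2) so that column blocks become nearly orthogonal, then run the lasso on each block independently. This is only an illustration of the step described above; the refinement and combination stages of the full DECO algorithm are omitted, and the ridge term r, penalty level and simulated data are arbitrary choices.

```python
# Rough sketch of the decorrelation idea behind DECO: whiten the rows of X and y
# with F = (X X^T / p + r I)^(-1/2), split the decorrelated columns into blocks,
# and fit a lasso on each block independently. Illustration only; not the full
# DECO algorithm.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p, m = 200, 400, 4                        # n << p, m feature blocks
Sigma = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # correlated features
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta = np.zeros(p)
beta[[0, 100, 250]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

# Decorrelation step: F = (X X^T / p + r I)^(-1/2), applied to both X and y.
r = 1.0
G = X @ X.T / p + r * np.eye(n)
vals, vecs = np.linalg.eigh(G)
F = vecs @ np.diag(vals ** -0.5) @ vecs.T
X_tilde, y_tilde = F @ X, F @ y

# Each "worker" fits a lasso on its own column block of the decorrelated data.
beta_hat = np.zeros(p)
for block in np.array_split(np.arange(p), m):
    fit = Lasso(alpha=0.05).fit(X_tilde[:, block], y_tilde)
    beta_hat[block] = fit.coef_

print("nonzero estimates at:", np.flatnonzero(np.abs(beta_hat) > 0.1))
```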

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework DEME (DECO-message) that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

100.00%

Publisher:

Abstract:

This symposium describes a multi-dimensional strategy to examine fidelity of implementation in an authentic school district context. An existing large-district peer mentoring program provides an example. The presentation will address development of a logic model to articulate a theory of change; collaborative creation of a data set aligned with essential concepts and research questions; identification of independent, dependent, and covariate variables; issues related to use of big data that include conditioning and transformation of data prior to analysis; operationalization of a strategy to capture fidelity of implementation data from all stakeholders; and ways in which fidelity indicators might be used.

Relevance:

100.00%

Publisher:

Abstract:

In order to become better prepared to support Research Data Management (RDM) practices in sciences and engineering, Queen’s University Library, together with the University Research Services, conducted a research study of all ranks of faculty members, as well as postdoctoral fellows and graduate students at the Faculty of Engineering & Applied Science, Departments of Chemistry, Computer Science, Geological Sciences and Geological Engineering, Mathematics and Statistics, Physics, Engineering Physics & Astronomy, School of Environmental Studies, and Geography & Planning in the Faculty of Arts and Science.

Relevance:

100.00%

Publisher:

Abstract:

Data journalism has become one of the trends taking hold in the media. In only a few years, the development and visibility of this specialization have grown considerably, and many media outlets on the international scene now have dedicated data journalism teams and sections. Likewise, there are applications, platforms, websites and foundations outside news organizations whose work can also be framed within this field. The main objective of this contribution is to provide an overview of the adoption of data journalism in Spain, both inside and outside the media. Although the discipline is still at a developmental stage, an exploratory study offering a panorama of its current situation in Spain seems appropriate.

Relevance:

100.00%

Publisher:

Abstract:

Purpose of this paper:
Recent literature indicates that around one third of perishable products end up as waste (Mena et al., 2014); 60% of this waste can be classified as avoidable (EC, 2010), suggesting logistics and operational inefficiencies along the supply chain. In developed countries, perishable products are predominantly wasted in wholesale and retail (Gustavsson et al., 2011) due to customer demand uncertainty and errors and delays in the supply chain (Fernie and Sparks, 2014). While research on the logistics of large retail supply chains is well documented, research on retail small and medium enterprises' (SMEs) capabilities to prevent and manage waste of perishable products is in its infancy (cf. Ellegaard, 2008) and needs further exploration. In our study, we investigate the retail logistics practice of small food retailers, the factors that contribute to perishable product waste, and the barriers and opportunities for SMEs in retail logistics to preserve product quality and participate in reverse logistics flows.

Design/methodology/approach:
As research on waste of perishable products in SMEs is scattered, we first focus on identifying key variables that contribute to the creation of avoidable waste. Second, we identify patterns of waste creation at the retail level and the possibilities for value added recovery. We use explorative case studies (Eisenhardt, 1989) and compare four SMEs and one large retailer that operate in a developed market. To gain insight into the specificities of SMEs that affect retail logistics practice, we select two types of food retailers: specialised (e.g. greengrocers and bakers) and general (e.g. convenience stores that sell perishable products as part of their assortment).

Findings:
Our preliminary findings indicate that there is a difference between large retailers and SME retailers in the factors that contribute to waste creation, as well as in the opportunities for value added recovery of products. While more factors appear to affect waste creation and management at large retailers, a small number of specific factors appears to affect SMEs. Similarly, large retailers utilise a range of practices to reduce the risks of product perishability and short shelf life, manage demand, and manage reverse logistics practices. Retail SMEs, on the other hand, have limited options to address waste creation and value added recovery. However, our findings show that specialist SMEs could successfully minimize waste and even create possibilities for value added recovery of perishable products. The data indicate that the business orientation of the SME, the buyer-supplier relationship, and the extent of adoption of lean principles in retail, coupled with SME resources, product-specific regulations and support from local authorities for waste management or partnerships with other organizations, determine the extent of successful preservation of product quality and value added recovery.

Value:
Our contribution to the SCM academic literature is threefold: first, we identify major factors that contribute to the generation of waste of perishable products in the retail environment; second, we identify possibilities for value added recovery of perishable products; and third, we present opportunities and challenges for SME retailers to manage or participate in activities of value added recovery. Our findings contribute to theory by filling a gap in the literature that considers product quality preservation and value added recovery in the context of retail logistics and SMEs.

Research limitations/implications:
Our findings are limited to insights from five case studies of retail companies that operate within a developed market. To improve generalisability, we intend to increase the number of cases and include data obtained from the suppliers and organizations involved in reverse logistics flows (e.g. local authorities, charities, etc.).

Practical implications:
With this paper, we contribute to the improvement of retail logistics and operations in SMEs, which constitute over 99% of business activity in the UK (Rhodes, 2015). Our findings will help retail managers and owners to better understand the possibilities for value added recovery, investigate a range of logistics and retail strategies suitable for the specificities of the SME environment and, ultimately, improve their profitability and sustainability.

Relevance:

100.00%

Publisher:

Abstract:

The astonishing development of diverse hardware platforms is twofold: on one side, the challenge of exascale performance for big data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, enabling a completely transparent layer between GPUs and VMs. This paper shows the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs on local workstations, computing clusters and distributed cloud appliances.

Relevance:

100.00%

Publisher:

Abstract:

Fundamentals of data science and introduction to COMP6235

Relevance:

100.00%

Publisher:

Abstract:

The disappearance of traditional competitive advantages and intensifying competition challenge companies to seek ways to maintain their competitiveness. The rapid development of information technology and the growth in the amount of data generated in business give companies the opportunity to use analytics to support decision-making and to make their operations more efficient. This work is a literature review whose objective is to identify the phases of an analytics system implementation project, the costs associated with implementation, and how those costs can be managed. In addition, a concise overview of the development and current state of analytics is presented, and procurement models, the financial evaluation of projects, and the critical success factors of an implementation project are examined. An implementation project has many phases, starting from business analysis and system design and extending to implementation and post-evaluation. Implementation involves several cost items, which can be classified according to their characteristics. The processes of project cost management are cost management planning, cost estimation, budget determination, and cost control, which overlap with the phases of the implementation project.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08