965 results for Data Analytics
Abstract:
Learning Analytics is an emerging field focused on analyzing learners' interactions with educational content. One of the key open issues in learning analytics is the standardization of the data collected. This is a particularly challenging issue in serious games, which generate a diverse range of data. This paper reviews the current state of learning analytics, data standards and serious games, studying how serious games track the interactions of their players and the metrics that can be distilled from them. Based on this review, we propose an interaction model that establishes a basis for applying Learning Analytics to serious games. This paper then analyzes the current standards and specifications used in the field. Finally, it presents an implementation of the model with one of the most promising specifications: the Experience API (xAPI). The Experience API relies on Communities of Practice developing profiles that cover different use cases in specific domains. This paper presents the Serious Games xAPI Profile: a profile developed to align with the most common use cases in the serious games domain. The profile is applied to a case study (a demo game), which explores the technical practicalities of standardizing data acquisition in serious games. In summary, the paper presents a new interaction model for tracking serious games and its implementation with the xAPI specification.
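To make the kind of data such a profile standardizes concrete, here is a minimal sketch of an xAPI statement that a serious game might emit when a player completes a level. The actor/verb/object structure and the ADL verb URI are standard xAPI; the LRS endpoint, credentials, activity ID and score are invented placeholders, not the paper's actual profile output.

```python
import json
import requests  # third-party HTTP client

# A minimal xAPI statement: actor, verb, object, plus a score result.
# The verb URI is standard ADL vocabulary; the activity ID and LRS
# endpoint below are hypothetical placeholders.
statement = {
    "actor": {
        "mbox": "mailto:player42@example.com",
        "name": "Player 42",
        "objectType": "Agent",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/games/demo-game/levels/3",
        "definition": {"name": {"en-US": "Level 3"}},
        "objectType": "Activity",
    },
    "result": {"score": {"scaled": 0.85}, "completion": True},
}

# Statements are POSTed to a Learning Record Store (LRS).
response = requests.post(
    "https://lrs.example.com/xAPI/statements",  # placeholder LRS
    auth=("lrs_user", "lrs_password"),          # placeholder credentials
    headers={"X-Experience-API-Version": "1.0.3",
             "Content-Type": "application/json"},
    data=json.dumps(statement),
)
print(response.status_code)
```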
Abstract:
Video games have become one of the largest entertainment industries, and their power to capture the attention of players worldwide soon prompted the idea of using games to improve education. However, these educational games, commonly referred to as serious games, face different challenges when brought into the classroom, ranging from pragmatic issues (e.g. a high development cost) to deeper educational issues, including a lack of understanding of how the students interact with the games and how the learning process actually occurs. This chapter explores the potential of data-driven approaches to improve the practical applicability of serious games. Existing work done by the entertainment and learning industries helps to build a conceptual model of the tasks required to analyze player interactions in serious games (gaming learning analytics or GLA). The chapter also describes the main ongoing initiatives to create reference GLA infrastructures and their connection to new emerging specifications from the educational technology field. Finally, it explores how this data-driven GLA will help in the development of a new generation of more effective educational games and new business models that will support their expansion. This raises additional ethical implications, which are discussed at the end of the chapter.
Abstract:
The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. These specifically concern data processing with semantic harmonisation, low-level fusion, analytics, knowledge modelling with high-level fusion, and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations, and also of the ongoing large eVacuate project on critical crowd behaviour detection in confined spaces.
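As a rough illustration of the multi-level processing described above, the sketch below chains a semantic-harmonisation step, a low-level fusion step and a simple high-level reasoning rule over readings from two heterogeneous sources. All schemas, values and thresholds are invented for the example; the actual TRIDEC and eVacuate pipelines are far richer.

```python
from statistics import mean

# Hypothetical raw readings from two heterogeneous sources.
source_a = [{"temp_f": 98.6, "ts": 1}, {"temp_f": 103.1, "ts": 2}]
source_b = [{"celsius": 37.1, "ts": 1}, {"celsius": 39.4, "ts": 2}]

def harmonise(a, b):
    """Semantic harmonisation: map both schemas onto one unit (Celsius)."""
    unified = [{"ts": r["ts"], "c": (r["temp_f"] - 32) * 5 / 9} for r in a]
    unified += [{"ts": r["ts"], "c": r["celsius"]} for r in b]
    return unified

def fuse(readings):
    """Low-level fusion: average co-timed readings into one estimate."""
    by_ts = {}
    for r in readings:
        by_ts.setdefault(r["ts"], []).append(r["c"])
    return {ts: mean(vals) for ts, vals in by_ts.items()}

def reason(fused, threshold=39.0):
    """High-level reasoning: flag timestamps breaching a domain rule."""
    return [ts for ts, c in sorted(fused.items()) if c > threshold]

alerts = reason(fuse(harmonise(source_a, source_b)))
print("alert timestamps:", alerts)  # -> [2]
```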
Abstract:
Massive Open Online Courses (MOOCs) generate enormous amounts of data. The University of Southampton has run and is running dozens of MOOC instances. The vast amount of data resulting from our MOOCs can provide highly valuable information to all parties involved in the creation and delivery of these courses. However, analysing and visualising such data is a task that not all educators have the time or skills to undertake. The recently developed MOOC Dashboard is a tool aimed at bridging this gap: it provides reports and visualisations based on the data generated by learners in MOOCs. Speakers: Manuel Leon is currently a Lecturer in Online Teaching and Learning in the Institute for Learning Innovation and Development (ILIaD). Adriana Wilde is a Teaching Fellow in Electronics and Computer Science, with research interests in MOOCs and Learning Analytics. Darron Tang (4th-year BEng Computer Science) and Jasmine Cheng (BSc Mathematics & Actuarial Science, starting an MSc in Data Science shortly) have been working as interns over the summer of 2016 and have been developing the MOOC Dashboard.
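As a hint of the kind of computation behind such reports, the sketch below aggregates a hypothetical step-activity export into per-step visit and completion counts with pandas. The column names and data are invented for the example and do not reflect the MOOC Dashboard's actual schema.

```python
import pandas as pd

# Hypothetical export: one row per learner per step visit.
events = pd.DataFrame({
    "learner_id": ["a", "a", "b", "b", "c"],
    "step":       ["1.1", "1.2", "1.1", "1.2", "1.1"],
    "completed_at": ["2016-07-01", None, "2016-07-02", "2016-07-03", None],
})

# Per-step totals: how many learners visited vs. completed each step.
summary = events.groupby("step").agg(
    visits=("learner_id", "nunique"),
    completions=("completed_at", "count"),  # non-null = completed
)
print(summary)
```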
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
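The overlay idea behind the second part can be pictured with a toy example: when several neighborhood queries overlap, a shared partial aggregate is computed once and reused. The sketch below does this for sum-aggregates on a tiny hand-built graph; it illustrates only the sharing principle, not the dissertation's actual system.

```python
# Toy graph: node -> neighbors, and a current value per node.
neighbors = {"u": ["a", "b"], "v": ["b", "c"]}
values = {"a": 3, "b": 5, "c": 2}

# Overlay: factor out the shared partial aggregate over {"b"} so it is
# computed once and reused by both neighborhood queries.
shared_nodes = set(neighbors["u"]) & set(neighbors["v"])  # {"b"}
partial = sum(values[n] for n in shared_nodes)

def neighborhood_sum(node):
    """Combine the shared partial with the query-specific remainder."""
    rest = [n for n in neighbors[node] if n not in shared_nodes]
    return partial + sum(values[n] for n in rest)

print(neighborhood_sum("u"))  # 5 + 3 = 8
print(neighborhood_sum("v"))  # 5 + 2 = 7
```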
Abstract:
Rigid adherence to pre-specified thresholds and static graphical representations can lead to incorrect decisions on the merging of clusters. As an alternative to existing automated or semi-automated methods, we developed a visual analytics approach for performing hierarchical clustering analysis of short time-series gene expression data. Dynamic sliders control parameters such as the similarity threshold at which clusters are merged and the level of relative intra-cluster distinctiveness, which can be used to identify "weak edges" within clusters. An expert user can drill down to further explore the dendrogram and detect nested clusters and outliers. This is done by using the sliders and by pointing and clicking on the representation to cut the branches of the tree at multiple heights. A prototype of this tool has been developed in collaboration with a small group of biologists for analysing their own datasets. Initial feedback on the tool has been positive.
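Computationally, the similarity-threshold slider corresponds to re-cutting a fixed dendrogram at a new height. The sketch below shows that operation with SciPy on synthetic expression profiles; the visual analytics layer described above sits on top of this kind of call, with invented data standing in for real gene expression.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic short time-series profiles: 6 genes x 4 time points.
rng = np.random.default_rng(0)
profiles = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(5, 1, (3, 4))])

# Build the dendrogram once...
tree = linkage(profiles, method="average", metric="euclidean")

# ...then each slider movement is just a re-cut at a new height.
for threshold in (1.0, 3.0, 8.0):
    labels = fcluster(tree, t=threshold, criterion="distance")
    print(f"threshold={threshold}: {labels}")
```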
Abstract:
As usage metrics take on an increasingly central role in library system assessment and analysis, librarians tasked with system selection, implementation, and support are driven to identify metric approaches that require less technical complexity while offering greater data granularity. Such approaches allow systems librarians to present evidence-based claims of platform usage behaviors while reducing the resources necessary to collect such information, offering a novel approach to real-time user analysis as well as a dual benefit in active and preventative cost reduction. As part of the DSpace implementation for the MD SOAR initiative, the Consortial Library Application Support (CLAS) division has begun a test implementation of the Google Tag Manager analytics system in an attempt to collect custom analytical dimensions that track author- and university-specific download behaviors. Building on the work of Conrad, CLAS seeks to demonstrate that the GTM approach to custom analytics provides granular, metadata-based usage statistics in an approach that will prove extensible for additional statistical gathering in the future. This poster will discuss the methodology used to develop these custom tag approaches, the benefits of using the GTM model, and the risks and benefits associated with further implementation.
Abstract:
Big Data is driving a global revolution. In every sector, public or private, and in industries such as retail, healthcare, media and transport, Big Data is influencing the lives of billions of people. The impact of Big Data is substantial, yet so unobtrusive that it goes unnoticed by most people. Business Intelligence and Advanced Analytics applications seek to study and extract information from Big Data. This work examines the transition from the former to the latter, highlighting similarities and differences.
Abstract:
Over the last decade, there has been a trend of water utility companies aiming to make water distribution networks more intelligent by incorporating IoT technologies, in order to improve their quality of service, reduce water waste, minimize maintenance costs, etc. Current state-of-the-art solutions use expensive, power-hungry deployments to monitor and transmit water network states periodically in order to detect anomalous behaviors such as water leakage and bursts. However, more than 97% of water network assets are remote from power sources and are often in geographically remote, underpopulated areas, facts that make current approaches unsuitable for the next generation of more dynamic, adaptive water networks. Battery-driven wireless sensor/actuator based solutions are theoretically the perfect choice to support next-generation water distribution. In this paper, we present an end-to-end water leak localization system, which exploits edge processing and enables the use of battery-driven sensor nodes. Our system combines a lightweight edge anomaly detection algorithm based on compression rates with an efficient localization algorithm based on graph theory. The edge anomaly detection and localization elements of the system produce a timely and accurate localization result and reduce communication by 99% compared to traditional periodic communication. We evaluated our schemes by deploying non-intrusive sensors measuring vibrational data on a real-world water test rig on which controlled leakage and burst scenarios were implemented.
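The compression-rate idea for edge anomaly detection can be sketched simply: a window of vibration samples whose compressibility drifts from the recent norm signals a change in signal structure. The code below is a minimal illustration of that principle using zlib; the paper's actual algorithm, thresholds and sensor data format are not reproduced here.

```python
import struct
import zlib

def compression_ratio(samples):
    """Bytes-out over bytes-in for a window of float samples."""
    raw = struct.pack(f"{len(samples)}f", *samples)
    return len(zlib.compress(raw)) / len(raw)

def is_anomalous(window, baseline_ratio, tolerance=0.15):
    """Flag windows whose compressibility drifts from the baseline."""
    return abs(compression_ratio(window) - baseline_ratio) > tolerance

# Hypothetical vibration windows: a steady hum vs. a leak-like burst.
steady = [0.1, 0.1, 0.1, 0.1] * 64
burst = [((i * 2654435761) % 1000) / 1000 for i in range(256)]  # noisy

baseline = compression_ratio(steady)
print(is_anomalous(steady, baseline))  # False
print(is_anomalous(burst, baseline))   # True (noise compresses poorly)
```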
Abstract:
Analytics is the technology of manipulating data to produce information capable of changing the world we live in every day. Analytics has been widely used within the last decade to cluster people's behaviour and predict their preferences: items to buy, music to listen to, movies to watch and even electoral preferences. The most advanced companies have succeeded in controlling people's behaviour using analytics. Despite this evident power, analytics is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems and production plants). This PhD thesis explores the fourth research paradigm (i.e. the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and the metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of this data is ensured by mathematical demonstrations inspired by factory physics theory. The availability, quantity and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in supply chain systems research.
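As a flavour of what ontology-driven data structures and factory-physics consistency checks might look like in code, the sketch below defines two supply chain entities and one derived metric (cycle time via Little's law, a staple of factory physics). The entity names and fields are invented for illustration and are not the thesis's ontology.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    throughput: float  # parts per hour
    wip: float         # average work-in-process (parts)

@dataclass
class StorageSystem:
    name: str
    capacity: int      # storage locations
    occupancy: float   # fraction of locations in use

def cycle_time(ws: Workstation) -> float:
    """Little's law (factory physics): CT = WIP / throughput."""
    return ws.wip / ws.throughput

press = Workstation("press-01", throughput=120.0, wip=30.0)
print(f"cycle time: {cycle_time(press):.2f} h")  # 0.25 h
```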
Abstract:
This thesis deals with the analysis and management of emergency healthcare processes through the use of advanced analytics and optimization approaches. Emergency processes are among the most complex within healthcare, due to their non-elective nature and their high variability. This thesis is divided into two topics. The first concerns the core of emergency healthcare processes, the emergency department (ED). In the second chapter, we describe the ED that serves as the case study: a real case study with data derived from a large ED located in northern Italy. In the following two chapters, we introduce two tools for supporting ED activities. The first is a new type of analytics model, whose aim is to overcome the traditional methods of analyzing the activities provided in the ED by means of an algorithm that analyses the ED pathway (organized as an event log) as a whole. The second tool is a decision-support system, which integrates a deep neural network for the prediction of patient pathways and an online simulator to evaluate the evolution of the ED over time. Its purpose is to provide a set of solutions to prevent and resolve ED overcrowding. The second part of the thesis focuses on the COVID-19 pandemic emergency. In the fifth chapter, we describe a tool that was used by the Bologna local health authority in the first phase of the pandemic. Its purpose is to analyze the clinical pathway of a patient and automatically assign them a state, which physicians used to route patients to the correct clinical pathways. The last chapter is dedicated to the description of a MIP model, which was used for the organization of the COVID-19 vaccination campaign in the city of Bologna, Italy.
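To give a sense of what a MIP formulation for a vaccination campaign involves, the sketch below sets up a deliberately tiny allocation model with the PuLP library: distribute a day's doses across hubs subject to capacity. All data, variables and the objective are invented for the illustration; the real Bologna model is necessarily much larger.

```python
import pulp

# Hypothetical data: daily capacity per hub and doses to allocate.
hubs = {"fair_hub": 800, "hospital": 300}
doses_available = 1000

model = pulp.LpProblem("vaccine_allocation", pulp.LpMaximize)

# Integer decision variables: doses administered at each hub.
x = {h: pulp.LpVariable(f"doses_{h}", lowBound=0, upBound=cap,
                        cat="Integer")
     for h, cap in hubs.items()}

# Objective: administer as many doses as possible.
model += pulp.lpSum(x.values())

# Cannot administer more doses than are available.
model += pulp.lpSum(x.values()) <= doses_available

model.solve(pulp.PULP_CBC_CMD(msg=False))
for h in hubs:
    print(h, int(x[h].value()))  # e.g. fair_hub 800, hospital 200
```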
Abstract:
Data is an invaluable resource for every organization. This information must on the one hand be managed through classic operational systems, and on the other be analyzed to obtain insights that can guide business decisions. One of the fundamental tools supporting business decisions is the data warehouse. This thesis is the result of an internship carried out with the company Injenia S.r.l. The focus of the internship was the optimization of a data warehouse that the company sells as an add-on module of a software product named Interacta. This data warehouse, Interacta Analytics, has over time shown significant architectural and performance weaknesses. The architecture currently used for creating and managing the data within Interacta Analytics follows a batch approach; the central goal of the study is therefore to find alternative batch solutions that save both money and time, while also exploring the possibility of a transition to a streaming architecture. The tools used in this research also had to remain in line with the technologies used for Interacta, i.e. the services of the Google Cloud Platform. After a brief discussion of the theoretical background of this subject area, the thesis focuses on the operation of the main software and on the logical structure of the analytics module. Finally, it presents the experimental work: first an analysis of the main weaknesses of the as-is system, then the formulation and evaluation of four batch and two streaming improvement hypotheses. As the conclusions of the research show, these greatly improve the performance of the analytics system in terms of processing time, total cost and architectural simplicity, in particular thanks to the use of serverless container and FaaS services on Google's cloud platform.
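The batch-versus-streaming trade-off at the heart of the study reduces, conceptually, to recomputing an aggregate from scratch on a schedule versus folding each event into a running state as it arrives. The sketch below contrasts the two in plain Python; it is a conceptual illustration only, not Interacta Analytics code or any specific Google Cloud service.

```python
# Batch approach: periodically recompute the aggregate over all events.
def batch_total(all_events):
    """O(n) on every run; cost grows with history size."""
    return sum(e["amount"] for e in all_events)

# Streaming approach: fold each event into a running state on arrival.
class StreamingTotal:
    """O(1) per event; state persists between events."""
    def __init__(self):
        self.total = 0

    def on_event(self, event):
        self.total += event["amount"]
        return self.total

events = [{"amount": 10}, {"amount": 5}, {"amount": 7}]

print(batch_total(events))  # 22, recomputed from scratch

stream = StreamingTotal()
for e in events:            # same answer, maintained incrementally
    stream.on_event(e)
print(stream.total)         # 22
```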
Abstract:
A global Italian pharmaceutical company has to provide two work environments that serve different needs. The environments will allow solutions to be developed in a controlled, secure and at the same time independent manner on a state-of-the-art enterprise cloud platform. The need to develop two different environments is dictated by the needs of the working units. The first environment is designed to facilitate the creation of applications related to genomics, and is therefore aimed primarily at data scientists. This environment is capable of consuming, producing, retrieving and incorporating data, and will support the programming languages most used for genomic applications (e.g., Python, R). The proposal was to provide a pool of ready-to-go virtual machines with different architectures, offering the best performance for the job that needs to be carried out. The second environment has a more traditional character: to obtain, via an ETL (Extract-Transform-Load) process, a global data model resembling a classical relational structure. It will provide the major BI operations (e.g., analytics, performance measures, reports) that can be leveraged both for application analysis and for internal usage. Since both architectures will hold large amounts of data covering not only pharmaceutical information but also internal company information, it will be possible to digest the data with reporting/analytics tools and also to apply data mining and machine learning technologies to exploit the intrinsic information. The thesis introduces proposals, implementations and descriptions of the technologies/platforms used, and future work for the environments discussed above.
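As a minimal picture of the ETL step feeding the second environment, the sketch below extracts rows from a CSV export, normalizes one field, and loads the rows into a relational table using only Python's standard library. The file name, columns and table are invented placeholders.

```python
import csv
import sqlite3

# Extract: read a hypothetical CSV export of trial results.
with open("trial_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # columns: compound, site, outcome

# Transform: normalize the free-text outcome field.
for r in rows:
    r["outcome"] = r["outcome"].strip().lower()

# Load: insert into a relational table for BI queries.
conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS trial_results
                (compound TEXT, site TEXT, outcome TEXT)""")
conn.executemany(
    "INSERT INTO trial_results VALUES (:compound, :site, :outcome)", rows)
conn.commit()
conn.close()
```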
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
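The Interactome module's export format can be pictured with a few lines of XML generation: XGMML is an XML dialect for attributed graphs that Cytoscape imports. The snippet below writes a two-node, one-edge network with the standard library; the node names and attribute values are invented, and the real IIS export carries far more annotation.

```python
import xml.etree.ElementTree as ET

XGMML_NS = "http://www.cs.rpi.edu/XGMML"

graph = ET.Element("graph", {"label": "demo_network", "xmlns": XGMML_NS})

# Two protein nodes with a sample attribute each (invented values).
for node_id, label in [("1", "YFG1"), ("2", "YFG2")]:
    node = ET.SubElement(graph, "node", {"id": node_id, "label": label})
    ET.SubElement(node, "att",
                  {"name": "expression", "type": "real", "value": "1.7"})

# One physical-interaction edge between them.
ET.SubElement(graph, "edge",
              {"source": "1", "target": "2", "label": "YFG1 (pp) YFG2"})

ET.ElementTree(graph).write("network.xgmml",
                            xml_declaration=True, encoding="UTF-8")
```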