365 results for Analytics


Relevance: 10.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, it has remained unknown whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications in photonics and optoelectronics. RET networks can be used in different ways, with a wide range of potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as (1) fluorescent taggants and (2) stochastic computing.
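To make the CTMC-to-phase-type mapping concrete, here is a minimal simulation sketch in Python, assuming a made-up absorbing CTMC rather than any chromophore network from the dissertation: transient states stand in for excited chromophores, the absorbing state for photon emission, and the sampled absorption times follow a phase-type distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absorbing CTMC: 3 transient states (excited chromophores)
# plus 1 absorbing state (photon emission). All rates are made up.
Q = np.array([
    [-3.0,  1.5,  0.5,  1.0],   # state 0 -> {1, 2, emit}
    [ 0.5, -2.0,  1.0,  0.5],   # state 1 -> {0, 2, emit}
    [ 0.2,  0.3, -2.5,  2.0],   # state 2 -> {0, 1, emit}
    [ 0.0,  0.0,  0.0,  0.0],   # absorbing state (emission)
])

def sample_absorption_time(Q, start=0):
    """Simulate the CTMC until absorption; the time is phase-type distributed."""
    state, t = start, 0.0
    n = Q.shape[0]
    while Q[state, state] != 0.0:              # absorbing states have zero exit rate
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)       # holding time in the current state
        probs = Q[state].clip(min=0.0) / rate  # jump probabilities to the other states
        state = rng.choice(n, p=probs)
    return t

samples = np.array([sample_absorption_time(Q) for _ in range(10_000)])
print(f"mean emission time ~ {samples.mean():.3f} (arbitrary time units)")
```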

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by the set of resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE)-based taggant identification guarantees high accuracy even with only a few hundred detected photons.
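As a hedged illustration of MLE-based identification from photon arrival times (the lifetime models, rates and photon count below are placeholders, not the taggant designs of the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative candidate taggants, each modeled (for this sketch only) as an
# exponential emission-time distribution with a distinct rate.
candidate_rates = {"taggant_A": 0.8, "taggant_B": 1.5, "taggant_C": 3.0}

def log_likelihood(times, rate):
    # Exponential pdf: rate * exp(-rate * t); sum log pdf over detected photons.
    return len(times) * np.log(rate) - rate * np.sum(times)

# Simulate ~300 detected photon arrival times from the "true" taggant.
true_rate = candidate_rates["taggant_B"]
times = rng.exponential(1.0 / true_rate, size=300)

# Pick the candidate whose model maximizes the log-likelihood.
scores = {name: log_likelihood(times, r) for name, r in candidate_rates.items()}
identified = max(scores, key=scores.get)
print(identified, scores)
```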

Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware that traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.

Relevance: 10.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming, so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
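As a hedged illustration of why dynamic programming is central here, the sketch below computes value functions and link choice probabilities for a recursive-logit-style model on a tiny acyclic network with made-up link utilities; it is not the exact model or estimation procedure of the thesis.

```python
import math

# Tiny acyclic network: node -> list of (next_node, deterministic link utility).
# The topology and utilities are made up for illustration.
network = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],  # destination
}

# Backward pass: V(destination) = 0, V(k) = log sum_a exp(v(a|k) + V(next(a))).
order = ["D", "B", "C", "A"]          # reverse topological order
V = {"D": 0.0}
for k in order[1:]:
    V[k] = math.log(sum(math.exp(v + V[nxt]) for nxt, v in network[k]))

# Link choice probabilities: P(a|k) = exp(v(a|k) + V(next(a)) - V(k)).
probs = {
    (k, nxt): math.exp(v + V[nxt] - V[k])
    for k, links in network.items()
    for nxt, v in links
}
print(V)
print(probs)
```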

Relevance: 10.00%

Abstract:

University libraries routinely collect statistics on the use of their print collections and on in-house activity. In parallel, and in a sustained way, they have been incorporating electronic resources and services, which has motivated the development of international standards defining indicators to measure their use; nevertheless, having standard software for this purpose is still a pending issue. On the other hand, several free and open source programs exist for measuring the activity of a website. The aim of this work is to determine whether the free web analytics tools AWStats, Google Analytics and Piwik can be used to evaluate the use of electronic resources and services according to the indicators proposed by the standards ANSI/NISO Z39.7-2013, ISO 2789:2003, ISO 20983:2003, BS ISO 11620:2008, EMIS, COUNTER and ICOLC. To this end, the analysis in this research was carried out on the website and the online catalog of the Biblioteca Florentino Ameghino, the central library of the Facultad de Ciencias Naturales y Museo of the Universidad Nacional de La Plata, Argentina. The results reflect the characteristics of the indicators, the software and the case study. These characteristics are discussed in the conclusions in order to give context and perspective to the answer to the question of whether it is feasible to measure the use of electronic resources and services of a university library by means of statistical programs for websites.

Relevance: 10.00%

Abstract:

Visual cluster analysis provides valuable tools that help analysts understand large data sets in terms of representative clusters and the relationships between them. Often, the clusters found are to be understood in the context of associated categorical, numerical or textual metadata given for the data elements. While often not part of the clustering process itself, such metadata play an important role and need to be considered during interactive cluster exploration. Traditionally, linked views allow analysts to relate (or, loosely speaking, correlate) clusters with metadata or other properties of the underlying cluster data. Manually inspecting the distribution of metadata for each cluster in a linked-view approach is tedious, especially for large data sets, where a large search problem arises. Fully interactive search for potentially useful or interesting cluster-to-metadata relationships can be a cumbersome and lengthy process. To remedy this problem, we propose a novel approach for guiding users in discovering interesting relationships between clusters and associated metadata. Its goal is to guide the analyst through the potentially huge search space. In this work we focus on metadata of categorical type, which can be summarized for a cluster in the form of a histogram. We start from a given visual cluster representation and compute certain measures of interestingness defined on the distribution of metadata categories for the clusters. These measures are used to automatically score and rank the clusters for potential interestingness with regard to the distribution of categorical metadata. Identified interesting relationships are highlighted in the visual cluster representation for easy inspection by the user. We present a system implementing an encompassing, yet extensible, set of interestingness scores for categorical metadata, which can also be extended to numerical metadata. Appropriate visual representations are provided for showing the visual correlations as well as the calculated ranking scores. Focusing on clusters of time series data, we test our approach on a large real-world data set of time-oriented scientific research data, demonstrating how specific interesting views are automatically identified, supporting the analyst in discovering interesting and visually understandable relationships.
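One concrete, purely illustrative instance of such an interestingness measure is the normalized entropy of a cluster's category histogram: the lower the entropy, the more the cluster is dominated by a few categories. The clusters and category labels below are placeholders, not the system's actual scores.

```python
import math
from collections import Counter

# Hypothetical clusters with one categorical metadata value per member.
clusters = {
    "cluster_1": ["sensor_A", "sensor_A", "sensor_A", "sensor_B"],
    "cluster_2": ["sensor_A", "sensor_B", "sensor_C", "sensor_D"],
}

def normalized_entropy(labels):
    """Entropy of the category histogram, scaled to [0, 1]."""
    counts = Counter(labels)
    n = len(labels)
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    max_h = math.log(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# Rank clusters: lower entropy = more skewed metadata distribution = flagged first.
ranking = sorted(clusters, key=lambda c: normalized_entropy(clusters[c]))
for c in ranking:
    print(c, round(normalized_entropy(clusters[c]), 3))
```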

Relevance: 10.00%

Abstract:

This essay deals with the written conversation that young people carry on today in their chats. The different linguistic varieties involved are discussed. Chat writing amounts to a conversation: written texts become oral texts, conversations are transcribed, and linguistic norms are broken. This does not mean that young people do not know what those norms are; it is simply that, in chats, they are not interested in them.

Relevance: 10.00%

Abstract:

The article analyzes the figure of the prosumer from the perspective of visual studies, combining speech act theory and new media theory. The aim is to assess whether Michel de Certeau's distinction between producers and consumers, strategies and tactics, remains operative in the graphical interfaces of Scott Lash's global information culture. To this end, it distinguishes two types of performativity of speech acts: the top-down performativity of software, and the bottom-up performativity of language games and forms of life. These types are applied to a discourse analysis of the slogans that appear on the websites of "open" initiatives and of the collaborative economy, since the former are devoted to the production of immaterial goods and the latter to the production of material goods. The analysis shows how the two types of performativity transform the textual analysis of literary and film studies into a methodology capable of investigating material actions, both human and non-human. The conclusions describe the emergence of new narrative conventions of power and control outside fiction that point to a "DIY society".

Relevance: 10.00%

Abstract:

This document describes the first bundle of core WP2 (user data analytics) client-side components, including their specifications, use cases and working prototypes. The included assets contain a description of their current status, along with links to their full designs and downloadable versions. This deliverable describes only operational software assets (albeit in beta) that are tested and documented. It should be noted, however, that various additional software assets (2.2d Cognitive Capacity Measurement and 2.3a Realtime Emotion Detection) are near completion for inclusion in games during the first pilot round. Those assets are still scheduled for inclusion in the final bundle deliverable D2.2.

Relevance: 10.00%

Abstract:

This document describes the first bundle of core WP2 (user data analytics) server-side components, including their specifications, use cases and working prototypes. The included assets contain a description of their current status, along with links to their full designs and downloadable versions.

Relevance: 10.00%

Abstract:

This document describes the core components for creating customizable game analytics and dashboards: their present status, links to their full designs and downloadable versions, and how to configure them and take advantage of the analytics visualizations and the underlying architecture of the platform. All the dashboard components work with data collected using the xAPI data format that the RAGE project has developed in collaboration with ADL Co-Lab.
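For orientation, a minimal xAPI-style statement for a game event might look like the sketch below; it uses the generic actor/verb/object structure with placeholder identifiers and is not the specific xAPI profile that RAGE and ADL Co-Lab defined.

```python
import json
from datetime import datetime, timezone

# Generic xAPI-style statement for a serious-game event (placeholder IDs/URIs).
statement = {
    "actor": {"name": "Player 42", "mbox": "mailto:player42@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/games/demo/level-3",
        "definition": {"name": {"en-US": "Level 3"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))
```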

Relevance: 10.00%

Abstract:

Clustering algorithms, pattern mining techniques and associated quality metrics have emerged as reliable methods for modeling learners' performance, comprehension and interaction in given educational scenarios. The specific characteristics of the available data, such as missing values, extreme values or outliers, make it challenging to extract user models that are meaningful from an educational perspective. In this paper we introduce a pattern detection mechanism within our data analytics tool based on k-means clustering and on the SSE, silhouette, Dunn index and Xie-Beni index quality metrics. Experiments performed on a dataset obtained from our online e-learning platform show that the extracted interaction patterns were representative for classifying learners. Furthermore, the performed monitoring activities created a strong basis for generating automatic feedback to learners on their course participation, relying on their previous performance. In addition, our analysis introduces automatic triggers that highlight learners who are at risk of failing the course, enabling tutors to take timely action.
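A minimal sketch of such a clustering-plus-quality-metrics loop, using k-means with SSE (inertia) and the silhouette score from scikit-learn on synthetic placeholder data; the Dunn and Xie-Beni indices used in the paper would require separate implementations and are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)

# Placeholder "learner interaction" features: e.g. logins, forum posts, quiz score.
X = np.vstack([
    rng.normal(loc=[5, 2, 0.4], scale=0.5, size=(50, 3)),    # low-activity group
    rng.normal(loc=[20, 10, 0.8], scale=0.5, size=(50, 3)),  # high-activity group
])

# Compare SSE (inertia) and silhouette across candidate numbers of clusters.
for k in range(2, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))
```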

Relevance: 10.00%

Abstract:

The speed at which new scientific papers are published has increased dramatically, while tracking the most recent high-impact publications has become more and more cumbersome. In order to support learners and researchers in retrieving relevant articles and identifying the most central researchers within a domain, we propose a novel 2-mode multilayered graph derived from Cohesion Network Analysis (CNA). The resulting extended CNA graph integrates both authors and papers, as well as three principal link types: co-authorship, co-citation, and semantic similarity among the contents of the papers. Our rankings do not rely on the number of published documents, but on their global impact based on links between authors, citations, and semantic relatedness to similar articles. As a preliminary validation, we have built a network based on the 2013 LAK dataset in order to reveal the most central authors within the emerging Learning Analytics domain.
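As a hedged sketch of the graph construction (not the CNA formulation itself), the code below builds a tiny two-mode author/paper graph with authorship, co-citation and semantic-similarity links, then ranks authors with PageRank as a simple stand-in centrality; all names, weights and edge types are placeholders.

```python
import networkx as nx

G = nx.Graph()

# Two node types: authors and papers (placeholder names).
G.add_nodes_from(["author_1", "author_2", "author_3"], kind="author")
G.add_nodes_from(["paper_A", "paper_B", "paper_C"], kind="paper")

# Authorship, co-citation and semantic-similarity links with made-up weights.
G.add_edge("author_1", "paper_A", kind="writes", weight=1.0)
G.add_edge("author_2", "paper_A", kind="writes", weight=1.0)   # coauthors via paper_A
G.add_edge("author_2", "paper_B", kind="writes", weight=1.0)
G.add_edge("author_3", "paper_C", kind="writes", weight=1.0)
G.add_edge("paper_A", "paper_B", kind="co-citation", weight=0.5)
G.add_edge("paper_B", "paper_C", kind="semantic-similarity", weight=0.7)

# Rank authors by a weighted centrality (PageRank here, as a simple stand-in).
scores = nx.pagerank(G, weight="weight")
authors = {n: s for n, s in scores.items() if G.nodes[n]["kind"] == "author"}
print(sorted(authors.items(), key=lambda kv: -kv[1]))
```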

Relevance: 10.00%

Abstract:

Software assets are a key output of the RAGE project; they can be used by applied game developers to enhance the pedagogical and educational value of their games. These software assets cover a broad spectrum of functionalities, from player analytics (including emotion detection) to intelligent adaptation and social gamification. In order to facilitate integration and interoperability, all of these assets adhere to a common model, which describes their properties through a set of metadata. In this paper the RAGE asset model and asset metadata model are presented, capturing the details of assets and their potential usage within three distinct dimensions: technological, gaming and pedagogical. The paper highlights key issues and challenges in constructing the RAGE asset and asset metadata models and details the process and design of a flexible metadata editor that facilitates both adaptation and improvement of the asset metadata model.
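Purely as an illustration of the three metadata dimensions, and not the actual RAGE metadata schema, an asset metadata record could be sketched as follows (all field names and values are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AssetMetadata:
    """Illustrative asset metadata record (not the official RAGE schema)."""
    name: str
    version: str
    # Technological dimension: how the asset is built and integrated.
    language: str
    dependencies: list[str] = field(default_factory=list)
    # Gaming dimension: where the asset fits in a game.
    game_genres: list[str] = field(default_factory=list)
    # Pedagogical dimension: the learning goals the asset supports.
    learning_goals: list[str] = field(default_factory=list)

example_asset = AssetMetadata(
    name="Example Analytics Asset",
    version="0.9-beta",
    language="C#",
    dependencies=["telemetry client"],
    game_genres=["simulation", "role-playing"],
    learning_goals=["engagement monitoring"],
)
print(example_asset)
```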

Relevance: 10.00%

Abstract:

The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the pedagogical effectiveness of their games. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to deal with the required statistical computations. Third, a set of language processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for storing data, e.g. player data or game world data, across multiple software components. The presented components are representative of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
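As a hedged sketch of the idea behind the Shared Data Storage component (the interface below is hypothetical, not the actual RAGE API), two components could share one storage backend through a minimal save/load contract:

```python
from typing import Any, Protocol

class SharedStorage(Protocol):
    """Hypothetical storage contract shared by multiple game components."""
    def save(self, key: str, value: Any) -> None: ...
    def load(self, key: str) -> Any: ...

class InMemoryStorage:
    """Trivial backend; a real one could target files, a database or the cloud."""
    def __init__(self) -> None:
        self._data: dict[str, Any] = {}
    def save(self, key: str, value: Any) -> None:
        self._data[key] = value
    def load(self, key: str) -> Any:
        return self._data[key]

# Two components (player analytics and game world state) sharing one backend.
storage: SharedStorage = InMemoryStorage()
storage.save("player/42/score", 0.85)
storage.save("world/level", 3)
print(storage.load("player/42/score"), storage.load("world/level"))
```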

Relevance: 10.00%

Abstract:

Graph analytics is an important and computationally demanding class of data analytics. It is essential to balance scalability, ease of use and high performance in large-scale graph analytics. As such, it is necessary to hide the complexity of parallelism, data distribution and memory locality behind an abstract interface. The aim of this work is to build a NUMA-aware, scalable graph analytics framework that does not demand significant parallel programming experience. The realization of such a system faces two key problems: (i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; (ii) how to efficiently apply graph partitioning in order to create separate and largely independent work items that can be distributed among threads.
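A toy illustration of problem (ii), assuming a placeholder graph and thread count: greedily partition vertices so that each thread receives a roughly edge-balanced, largely independent work item. Real NUMA-aware frameworks use far more sophisticated partitioners.

```python
from collections import defaultdict

# Toy undirected graph as an adjacency list (placeholder data).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4],
}
num_threads = 2

# Greedy vertex partitioning: assign each vertex (with its edge list) to the
# partition that currently holds the fewest edge endpoints, balancing per-thread work.
partitions = defaultdict(list)
edge_load = [0] * num_threads
for v, neighbors in sorted(graph.items(), key=lambda kv: -len(kv[1])):
    target = min(range(num_threads), key=lambda p: edge_load[p])
    partitions[target].append(v)
    edge_load[target] += len(neighbors)

for p in range(num_threads):
    print(f"thread {p}: vertices {sorted(partitions[p])}, edge endpoints {edge_load[p]}")
```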

Relevance: 10.00%

Abstract:

Educational systems worldwide are facing an enormous shift as a result of sociocultural, political, economic, and technological changes. The technologies and practices that have developed over the last decade have been heralded as opportunities to transform both online and traditional education systems. While proponents of these new ideas often postulate that they have the potential to address the educational problems facing both students and institutions, and that they could provide an opportunity to rethink the ways that education is organized and enacted, there is little evidence of emerging technologies and practices in use in online education. Because researchers and practitioners interested in these possibilities often reside in various disciplines and academic departments, sharing and disseminating their work across often rigid boundaries is a formidable task. Contributors to Emergence and Innovation in Digital Learning include individuals who are shaping the future of online learning with their innovative applications and investigations of the impact of issues such as openness, analytics, MOOCs, and social media. Building on work first published in Emerging Technologies in Distance Education, the contributors to this collection harness the dispersed knowledge in online education to provide a one-stop locale for work on emergent approaches in the field. Their conclusions will influence the adoption and success of these approaches to education and will enable researchers and practitioners to conceptualize, critique, and enhance their understanding of the foundations and applications of new technologies.