900 results for Google Analytics


Relevance:

20.00%

Publisher:

Abstract:

Big Data Analytics is an emerging field, since massive storage and computing capabilities have been made available by advanced e-infrastructures. Earth and Environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specific challenges: the central relevance of geospatial information, a wide heterogeneity of data models and formats, and complex processing. Big Earth Data Analytics therefore requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
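
The core of the EarthServer approach is querying coverages through OGC WCPS. As a rough illustration only, the sketch below sends a WCPS expression to a hypothetical rasdaman-style endpoint; the endpoint URL, coverage name, and axis extents are all invented for the example, not taken from the paper.

```python
import requests

# Hypothetical EarthServer/rasdaman WCS endpoint -- an assumption, not a real service.
ENDPOINT = "https://example.org/rasdaman/ows"

# A WCPS expression: slice a (hypothetical) temperature coverage over a
# spatio-temporal window and ask for a JSON encoding of the result.
wcps_query = """
for c in (AvgLandTemp)
return encode(
    c[Lat(40:45), Long(10:15), ansi("2014-01":"2014-12")],
    "application/json")
"""

response = requests.get(
    ENDPOINT,
    params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",  # the WCPS extension of WCS 2.0
        "query": wcps_query,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```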


Relevance:

20.00%

Publisher:

Abstract:

In the last couple of years, MOOCs have received a great deal of attention, and more and more universities have started offering them. Although the open dimension of MOOCs suggests openness in every respect, in most cases a MOOC is a course with a structure and a timeline within which learning activities are positioned. There is a contradiction there: the open aspect places MOOCs in the non-formal professional learning domain, while the course structure takes them into the formal, traditional education domain. Accordingly, there is no consensus yet on solid pedagogical approaches for MOOCs. Something similar can be said of learning analytics, another upcoming concept that is receiving a lot of attention. Given its nature, learning analytics offers large potential to support learners, in particular in MOOCs. Learning analytics should be applied to assist learners and teachers in understanding the learning process; it could predict learning and provide opportunities for proactive feedback, but should also result in interventions aimed at improving progress. This paper illustrates pedagogical and learning analytics approaches based on practices developed in formal online and distance teaching university education that have been fine-tuned for MOOCs and piloted in the context of the EU-funded MOOC projects ECO (Elearning, Communication, Open-Data: http://ecolearning.eu) and EMMA (European Multiple MOOC Aggregator: http://platform.europeanmoocs.eu).
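
One toy illustration of the kind of proactive intervention trigger the abstract has in mind: flag learners whose recent platform activity drops below a threshold. The event data, window size, and cutoff below are entirely invented; real MOOC analytics would use richer features than raw event counts.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical clickstream: (learner_id, timestamp) pairs from a MOOC platform.
events = [
    ("alice", datetime(2016, 10, 1)), ("alice", datetime(2016, 10, 5)),
    ("bob",   datetime(2016, 9, 1)),   # bob has been inactive for weeks
]

def flag_at_risk(events, now, window_days=14, min_events=2):
    """Return learners with fewer than min_events events in the recent window."""
    cutoff = now - timedelta(days=window_days)
    recent = Counter(lid for lid, ts in events if ts >= cutoff)
    learners = {lid for lid, _ in events}
    return sorted(lid for lid in learners if recent[lid] < min_events)

print(flag_at_risk(events, now=datetime(2016, 10, 10)))  # ['bob']
```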

Relevance:

20.00%

Publisher:

Abstract:

Poster presentation for our paper: Brouns, F., & Firssova, O. (2016, October). The role of learning design and learning analytics in MOOCs. Paper presented at the 9th EDEN Research Workshop, Oldenburg, Germany.

Relevance:

20.00%

Publisher:

Abstract:

Learning Analytics is an emerging field focused on analyzing learners' interactions with educational content. One of the key open issues in learning analytics is the standardization of the data collected. This is particularly challenging in serious games, which generate a diverse range of data. This paper reviews the current state of learning analytics, data standards and serious games, studying how serious games track the interactions of their players and the metrics that can be distilled from them. Based on this review, we propose an interaction model that establishes a basis for applying Learning Analytics to serious games. The paper then analyzes the current standards and specifications used in the field. Finally, it presents an implementation of the model with one of the most promising specifications: the Experience API (xAPI). The Experience API relies on Communities of Practice developing profiles that cover different use cases in specific domains. This paper presents the Serious Games xAPI Profile, a profile developed to align with the most common use cases in the serious games domain. The profile is applied to a case study (a demo game) that explores the technical practicalities of standardizing data acquisition in serious games. In summary, the paper presents a new interaction model for tracking serious games and its implementation with the xAPI specification.
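
For readers unfamiliar with xAPI: a statement is an actor-verb-object JSON document posted to a Learning Record Store (LRS). The sketch below builds one plausible statement for a game event. The LRS URL, credentials, and object IRI are placeholders, and the verb shown is a generic ADL verb; a real deployment would use the IRIs defined by the Serious Games xAPI Profile the paper introduces.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder LRS endpoint and credentials -- assumptions, not from the paper.
LRS_URL = "https://lrs.example.org/xAPI/statements"

# One xAPI statement: a player completes a level of a serious game.
statement = {
    "actor": {"name": "Player 1", "mbox": "mailto:player1@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://game.example.org/levels/3",  # illustrative activity IRI
        "definition": {"name": {"en-US": "Level 3"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

resp = requests.post(
    LRS_URL,
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},  # required by the xAPI spec
    auth=HTTPBasicAuth("key", "secret"),
)
resp.raise_for_status()
```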

Relevance:

20.00%

Publisher:

Abstract:

Video games have become one of the largest entertainment industries, and their power to capture the attention of players worldwide soon prompted the idea of using games to improve education. However, these educational games, commonly referred to as serious games, face different challenges when brought into the classroom, ranging from pragmatic issues (e.g. a high development cost) to deeper educational issues, including a lack of understanding of how the students interact with the games and how the learning process actually occurs. This chapter explores the potential of data-driven approaches to improve the practical applicability of serious games. Existing work done by the entertainment and learning industries helps to build a conceptual model of the tasks required to analyze player interactions in serious games (gaming learning analytics or GLA). The chapter also describes the main ongoing initiatives to create reference GLA infrastructures and their connection to new emerging specifications from the educational technology field. Finally, it explores how this data-driven GLA will help in the development of a new generation of more effective educational games and new business models that will support their expansion. This results in additional ethical implications, which are discussed at the end of the chapter.
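
The GLA pipeline the chapter describes ultimately turns raw interaction traces into metrics. As a stripped-down illustration of that step (not the chapter's actual infrastructure), the sketch below distills a completion rate and an average session time from a list of tracked events; all field names and values are invented.

```python
from statistics import mean

# Hypothetical tracked events: one dict per play session.
sessions = [
    {"player": "p1", "completed": True,  "seconds": 420},
    {"player": "p2", "completed": False, "seconds": 150},
    {"player": "p3", "completed": True,  "seconds": 380},
]

# Distill two simple metrics from the raw traces.
completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
avg_time = mean(s["seconds"] for s in sessions)
print(f"completion rate: {completion_rate:.0%}, mean session: {avg_time:.0f}s")
```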

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents Google Fusion Tables, a service offered by Google for managing databases. The free, online service is intended to support the tasks of database administrators, providing data-manipulation operations such as extraction, aggregation, filtering, and fusion. The service works with structured data, which is extracted from Web pages by dedicated search engines such as WebTables, also discussed in the thesis. Google Fusion Tables is used in scientific contexts and was created to expose the results of scientific research that are often locked away in databases and spreadsheets rarely shared on the Web. The service is also very practical for companies, which can integrate data from inside and outside the organization to broaden their knowledge and gain a competitive advantage over rivals. The thesis therefore presents the distinctive features that could lead many organizations to bet on this service.
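
To make the service concrete: Fusion Tables exposed an SQL-like query language over a REST API (the service has since been discontinued by Google). A minimal sketch of what a read query looked like; the table ID, column name, and API key are placeholders.

```python
import requests

# Placeholders -- a real call required an actual table ID and Google API key.
TABLE_ID = "1abc_placeholder_table_id"
API_KEY = "YOUR_API_KEY"

# Fusion Tables accepted a restricted, SQL-like dialect over HTTP.
sql = f"SELECT Species, COUNT() FROM {TABLE_ID} GROUP BY Species"

resp = requests.get(
    "https://www.googleapis.com/fusiontables/v2/query",
    params={"sql": sql, "key": API_KEY},
)
resp.raise_for_status()
print(resp.json())  # columns and rows returned as JSON
```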

Relevance:

20.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

20.00%

Publisher:

Abstract:

In recent years, through several research projects, Universidad EAFIT has been developing a proposal to define the technological components that should make up an ecosystem of educational applications, in order to leverage the adoption of the ubiquity model in higher-education institutions. Through its research group on development and innovation in Information and Communication Technologies (GIDITIC), it selected the first components of the ecosystem in earlier thesis work [1, 2]. In addition, work carried out by the local government of the Alcaldía de Medellín in its Medellín Ciudad Inteligente project [3] also selected some components needed to implement the portal. Both initiatives agree on including an activity-logging component, known as an "experience storage system" or Learning Record Store (LRS). Given this background, the aim is to implement an LRS that fulfils the objectives of the University's project, following standards that ensure interoperability with the other components of the educational application ecosystem.
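
The interoperability the thesis aims for comes from the LRS's standard xAPI REST interface, through which any other component of the ecosystem can read back stored statements. A minimal sketch, assuming a placeholder LRS endpoint, credentials, and activity IRI:

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder LRS -- any xAPI-conformant store exposes this same interface.
LRS_URL = "https://lrs.example.org/xAPI/statements"

# Fetch the most recent statements recorded for one learning activity.
resp = requests.get(
    LRS_URL,
    params={
        "activity": "https://campus.example.org/courses/ubiquity-101",
        "limit": 10,
    },
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=HTTPBasicAuth("key", "secret"),
)
resp.raise_for_status()
for stmt in resp.json()["statements"]:
    print(stmt["actor"].get("name"), stmt["verb"]["id"])
```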

Relevance:

20.00%

Publisher:

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
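
The dissertation's second part concerns continuous neighborhood-driven aggregates. As a stripped-down illustration of the underlying idea only (not the aggregation-overlay system itself), the sketch below incrementally maintains, for each node, a running count of events seen in its 1-hop neighborhood as an event stream arrives.

```python
from collections import defaultdict

# Toy dynamic graph: undirected adjacency lists.
adj = defaultdict(set)

def add_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)

# Incrementally maintained aggregate: number of events observed in each
# node's 1-hop neighborhood (the node itself plus its neighbors).
neighborhood_events = defaultdict(int)

def on_event(node):
    """A node produced an event; push the update to all affected aggregates."""
    neighborhood_events[node] += 1
    for nbr in adj[node]:
        neighborhood_events[nbr] += 1

add_edge("a", "b"); add_edge("b", "c")
on_event("b")                       # affects a, b, and c
print(dict(neighborhood_events))    # {'b': 1, 'a': 1, 'c': 1}
```

This push-on-update pattern trades write amplification for low-latency reads; the dissertation's overlay structure goes further by sharing partial aggregates across queries.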

Relevance:

20.00%

Publisher:

Abstract:

In linguistics, mainly for English, the Gunning Fog Index is used to determine the readability of a text. The index estimates the years of formal education needed to understand the text on a first reading; an index of 11 corresponds to a person who has finished secondary school (Gunning, 1973). In this research we analyze how the index varies when one of its parameters is obtained differently. The original formula counts as "complex words" those with three or more syllables. We instead use "unknown words", i.e., words whose use is unfamiliar according to a corpus built during the research from millions of books digitized by Google and Harvard University. Although the results vary with the value used to decide whether a word is unknown, this research pioneers the use of a corpus to compute the Fog Index.
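
For reference, the Gunning Fog formula is Fog = 0.4 × (words/sentences + 100 × complex/words). The sketch below implements both the original complex-word criterion and the abstract's corpus-based variant; the vowel-group syllable counter and the tiny familiarity set are crude stand-ins for a real syllabifier and the Google/Harvard books corpus.

```python
import re

def fog_index(sentences, is_complex):
    """Gunning Fog: 0.4 * (words per sentence + 100 * complex fraction)."""
    words = [w.strip(".,").lower() for s in sentences for w in s.split()]
    complex_count = sum(1 for w in words if is_complex(w))
    return 0.4 * (len(words) / len(sentences)
                  + 100 * complex_count / len(words))

# Original criterion: three or more syllables (very rough vowel-group count).
def is_polysyllabic(word):
    return len(re.findall(r"[aeiouy]+", word)) >= 3

# Variant from the abstract: "complex" means unfamiliar in a frequency corpus.
# A real corpus would be derived from the digitized-books data.
FAMILIAR = {"the", "cat", "sat", "on", "a", "mat", "it", "was"}
def is_unknown(word):
    return word not in FAMILIAR

text = ["The cat sat on a mat.", "It was a perspicacious cat."]
print(fog_index(text, is_polysyllabic))  # original formula
print(fog_index(text, is_unknown))       # corpus-based variant
```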

Relevance:

20.00%

Publisher:

Abstract:

The number of pages available on the Web has grown so large that retrieving information manually has become impossible, requiring mechanisms to assist in the process. In this context, search engines can be considered an important category of cyberspace, especially for Information Science, because they concern the organization of knowledge in that environment, to the point that Google has come to be regarded as the gateway to cyberspace. This highlights the influence that the interfaces of such engines can have on people's information behavior. Recent research shows that in the last few years new elements containing structured data have been inserted into Google's results pages, which may create conditions for changes in user behavior. This work presents characteristics of eye-tracking technology and its use in User Experience, reporting results obtained through an experimental investigation that compared user behavior on Google and Yahoo results pages. In the tests with Google, participants needed about 30% less time to decide which link to choose; they may have been influenced by the element known as the rich snippet. The results show that the interface was able to influence participants' choice of the best link, highlighting the importance of result presentation in users' decision-making.

Relevance:

20.00%

Publisher:

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome.

Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach. Visual analytics tools leverage humans' ability to easily see patterns and anomalies they were not expecting, but are limited by uncertainty in findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once).

I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts. This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
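
The multiple-comparisons problem that HVHT addresses can be made concrete with a generic sketch (not the dissertation's implementation): run one hypothesis test per metric across two cohorts, then apply Benjamini-Hochberg correction so that only results surviving the false-discovery-rate threshold are surfaced. The metric names, distributions, and test choice below are illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Two cohorts of records, with one synthetic value per metric per record.
metrics = {
    "event duration":    (rng.normal(5, 1, 200), rng.normal(6, 1, 200)),  # differs
    "events per record": (rng.normal(3, 1, 200), rng.normal(3, 1, 200)),  # same
}

# One test per metric -> many p-values (the "high volume" in HVHT).
pvals = {name: mannwhitneyu(a, b).pvalue for name, (a, b) in metrics.items()}

def benjamini_hochberg(pvals, alpha=0.05):
    """Return the metric names whose p-values survive FDR correction."""
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m = len(ranked)
    keep = 0
    for i, (_, p) in enumerate(ranked, start=1):
        if p <= alpha * i / m:
            keep = i  # largest rank satisfying the BH criterion
    return [name for name, _ in ranked[:keep]]

print(benjamini_hochberg(pvals))  # expect only 'event duration' to survive
```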