18 results for Google Analytics
at University of Southampton, United Kingdom
Abstract:
A summary of the literature
Abstract:
In this lecture for a second-year interdisciplinary course (part of the curriculum innovation programme), we explore the scope of social media analytics and look at two aspects in depth: analysing for influence (looking at factors such as network structure, propagation of content and interaction), and analysing for trust (looking at different methods including policy, provenance and reputation, both local and global). The lecture notes include a number of short videos, which cannot be included here for copyright reasons.
Abstract:
Wednesday 26th March 2014
Speaker(s): Dr Trung Dong Huynh
Organiser: Dr Tim Chown
Time: 26/03/2014 11:00-11:50
Location: B32/3077
File size: 349 MB

Abstract: Understanding the dynamics of a crowdsourcing application and controlling the quality of the data it generates is challenging, partly due to the lack of tools to do so. Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer their quality. It can also reveal the processes that led to a data item and the interactions of contributors with it. Provenance patterns can manifest real-world phenomena such as significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. In this talk, I will present an application-independent methodology for analysing provenance graphs, constructed from provenance records, to learn about such patterns and to use them for assessing some key properties of crowdsourced data, such as their quality, in an automated manner. I will also talk about CollabMap (www.collabmap.org), an online crowdsourced mapping application, and show how we applied the approach above to the trust classification of data generated by the crowd, achieving an accuracy of over 95%.
Abstract:
Resources from the Singapore Summer School 2014 hosted by NUS. ws-summerschool.comp.nus.edu.sg
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. tsunami early warning centres, civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present ongoing work by the IT Innovation Centre over the last four years (the TRIDEC and REVEAL FP7 projects) in building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
Abstract:
An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally-tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour.
The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
Abstract:
Search engines - such as Google - have been characterized as "Databases of intentions". This class will focus on different aspects of intentionality on the web, including goal mining, goal modeling and goal-oriented search. Readings: M. Strohmaier, M. Lux, M. Granitzer, P. Scheir, S. Liaskos, E. Yu, How Do Users Express Goals on the Web? - An Exploration of Intentional Structures in Web Search, We Know'07 International Workshop on Collaborative Knowledge Management for Web Information Systems in conjunction with WISE'07, Nancy, France, 2007. [Web link] Readings: Automatic identification of user goals in web search, U. Lee and Z. Liu and J. Cho WWW '05: Proceedings of the 14th International World Wide Web Conference 391--400 (2005) [Web link]
Abstract:
What are the ways of searching in graphs? In this class, we will discuss the basics of link analysis, including Google's PageRank algorithm as an example. Readings: The PageRank Citation Ranking: Bringing Order to the Web, L. Page, S. Brin, R. Motwani and T. Winograd (1998), Stanford Technical Report
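The link-analysis idea behind PageRank can be sketched in a few lines: each page repeatedly shares its current score among the pages it links to, with a damping factor modelling a random surfer who occasionally jumps to an arbitrary page. This is a minimal illustrative sketch, not the readings' implementation; the four-page link graph and the function name `pagerank` are made-up examples.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank for a dict mapping page -> list of outlinks."""
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # every page gets the "random jump" share of (1 - damping)
        new_ranks = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page splits its damped rank evenly among its outlinks
                share = damping * ranks[page] / len(outlinks)
                for target in outlinks:
                    new_ranks[target] += share
            else:
                # dangling page: spread its rank over all pages
                for target in pages:
                    new_ranks[target] += damping * ranks[page] / n
        ranks = new_ranks
    return ranks

# Toy web graph: A links to B and C, B to C, C back to A, D only to C.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = pagerank(graph)
print(sorted(scores, key=scores.get, reverse=True))  # C attracts the most links
```

Because C receives links from three pages while D receives none, C ends up with the highest score and D with the lowest; the scores always sum to 1, so they can be read as a probability distribution over pages.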
Abstract:
What kind of science is appropriate for understanding the Facebook? How does Google find what you're looking for... and exactly how do they make money doing so? What structural properties might we expect any social network to have? How does your position in an economic network (dis)advantage you? How are individual and collective behavior related in complex networks? What might we mean by the economics of spam? What do game theory and the Paris subway have to do with Internet routing? What's going on in the pictures to the left and right?

Networked Life looks at how our world is connected -- socially, economically, strategically and technologically -- and why it matters. The answers to the questions above are related. They have been the subject of a fascinating intersection of disciplines including computer science, physics, psychology, mathematics, economics and finance. Researchers from these areas all strive to quantify and explain the growing complexity and connectivity of the world around us, and they have begun to develop a rich new science along the way.

Networked Life will explore recent scientific efforts to explain social, economic and technological structures -- and the way these structures interact -- on many different scales, from the behavior of individuals or small groups to that of complex networks such as the Internet and the global economy.

This course covers computer science topics and other material that is mathematical, but all material will be presented in a way that is accessible to an educated audience with or without a strong technical background. The course is open to all majors and all levels, and is taught accordingly. There will be ample opportunities for those of a quantitative bent to dig deeper into the topics we examine. The majority of the course is grounded in scientific and mathematical findings of the past two decades or less.
Abstract:
Slides and an essay on the Web Graph, search engines, and how Google calculates PageRank
Abstract:
It's easy to collect images from the internet for research.
Abstract:
Some of this set of resources is a verbatim copy of a Google Knol created by Norman Creaney of the University of Ulster. Other parts of the document contextualise the content in terms of preparing for a stage test in Legal and Professional Issues. The notes should be read in conjunction with other materials which have been provided as slides and handouts (notably handouts covering Workplace perspectives).
Abstract:
INFO2009 Assignment 2 reference list for team "Quintinlessness" - Subject: Open source software