64 results for seminars


Relevance: 10.00%

Abstract:

In their second year, our undergraduate web scientists undertake a group project module (WEBS2002, led by Jonathon Hare and co-taught by Su White) in which they apply what they learnt in the first year to a practical web-science problem and also learn about team working. For this semester's project, the students were provided with a large dataset of geolocated images and associated metadata collected from the Flickr website, and were tasked with exploring what this data could tell us about New York City. In this seminar the two groups will present the outcomes of their work. Team Alpha (Wil Muskett, Mark Cole & Jiwanjot Guron) will present their work on "An exploration of deprivation in NYC through Flickr". This work explores whether social deprivation can be predicted geo-spatially from social media, by correlating the Flickr data with official statistics including poverty indices and crime rates. Team Bravo (Edward Baker, Callum Rooke & Rachel Whalley) will present their work on "Determining the Impact of the Flickr Relaunch on Usage and User Behaviour in New York City". This work explores the effect of the 2013 Flickr site relaunch, looking at how user demographics and the types of content created by users changed with it.
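
Team Alpha's approach lends itself to a simple illustration. The sketch below is not the students' actual code; it shows the kind of per-district correlation they describe, assuming hypothetical CSV files and column names for the Flickr photo counts and the official statistics.

```python
# A minimal sketch of correlating per-district Flickr activity with official
# deprivation statistics. File names and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: one row per NYC community district.
photos = pd.read_csv("flickr_photo_counts.csv")   # columns: district, photo_count
stats = pd.read_csv("official_statistics.csv")    # columns: district, poverty_rate, crime_rate

merged = photos.merge(stats, on="district")

for indicator in ["poverty_rate", "crime_rate"]:
    r, p = pearsonr(merged["photo_count"], merged[indicator])
    print(f"photo_count vs {indicator}: r={r:.2f}, p={p:.3f}")
```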

Relevance: 10.00%

Abstract:

IBM provides a comprehensive academic initiative (http://www-304.ibm.com/ibm/university/academic/pub/page/academic_initiative) that gives universities free-of-charge access to a wide range of IBM software. As part of this initiative we are currently offering free IBM Bluemix accounts, either to be used within a course or for students to use for personal skills development. IBM Bluemix provides a comprehensive cloud-based platform-as-a-service solution set, which includes the ability to quickly and easily integrate data from Internet of Things (IoT) devices in order to develop and run productive, user-focused web and mobile applications. If you are interested in hearing more about IBM and the Internet of Things, or would like to discuss prospective research projects that you feel would work well in this environment, please come along to the seminar!

Relevance: 10.00%

Abstract:

An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reasoning, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state of the art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems': systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
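
As a purely illustrative aside (not the speaker's model), the toy sketch below shows the core idea of hierarchical predictive processing: a higher-order layer predicts the activity of a lower-order layer, and both its state and its generative weights are adjusted to reduce the prediction error. All dimensions, signals, and learning rates are arbitrary assumptions.

```python
# A toy two-level predictive-coding loop: the higher layer predicts the lower
# layer's activity; state and weights are nudged to reduce prediction error.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=8), rng.normal(size=4)]    # lower- and higher-order activity
weights = [rng.normal(scale=0.1, size=(8, 4))]       # top-down prediction weights
lr = 0.05

def step(sensory_input):
    layers[0] = sensory_input
    prediction = weights[0] @ layers[1]              # higher layer predicts lower-layer activity
    error = layers[0] - prediction                   # prediction error at the lower level
    layers[1] += lr * weights[0].T @ error           # update higher-level state to reduce error
    weights[0] += lr * np.outer(error, layers[1])    # slow learning of the generative model
    return float(np.mean(error ** 2))

for t in range(200):
    mse = step(np.sin(np.linspace(0, 2 * np.pi, 8)) + rng.normal(scale=0.05, size=8))
print(f"final mean squared prediction error: {mse:.4f}")
```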

Relevance: 10.00%

Abstract:

In this seminar slot, we will discuss Steve's research aims and plan. Massive open online courses (MOOCs) have received substantial coverage, both positive and negative, in mainstream sources, academic media, and scholarly journals. Numerous articles have addressed their potential impact on higher education systems in general, and some have highlighted problems with the instructional quality of MOOCs and the lack of attention in MOOC design to research from the online learning and distance education literature. However, few studies have looked at the relationship between social change and the construction of MOOCs within higher education, particularly in terms of educator and learning designer practices. This study aims to use the analytical strategy of Socio-Technical Interaction Networks (STIN) to explore the extent to which MOOCs are socially shaped and their relationship to educator and learning designer practices. The study involves a multi-site case study of three UK MOOC-producing universities and aims to capture an empirically based, nuanced understanding of the extent to which MOOCs are socially constructed in particular contexts, and of the social implications of MOOCs, especially for educators and learning designers.

Relevance: 10.00%

Abstract:

Reputation, influenced by ratings from past clients, is crucial for providers competing for custom. For new providers with little track record, a few negative ratings can harm their chances of growing. In the JASPR project, we aim to look at how to ensure that automated reputation assessments are justified and informative. Even an honest, balanced review of a service provision may be an unreliable predictor of future performance if the circumstances differ: for example, a service may previously have relied on different sub-providers than it does now, or have been affected by season-specific weather events. A common way to discount ratings that may not reflect future performance is to weight them by recency. We argue that better results are obtained by querying provenance records of how services were provided for the circumstances of each provision, in order to determine the significance of past interactions. Informed by case studies in global logistics, taxi hire, and courtesy car leasing, we are going on to explore the generation of explanations for reputation assessments, which can be valuable both for clients and for providers wishing to improve their match to the market, and to apply machine learning to predict aspects of service provision that may influence decisions on the appropriateness of a provider. In this talk, I will give an overview of the research conducted and planned on JASPR.

Speaker biography: Dr Simon Miles is a Reader in Computer Science at King's College London, UK, and head of the Agents and Intelligent Systems group. He conducts research in the areas of normative systems, data provenance, and medical informatics, has published widely, and manages a number of research projects in these areas. He was previously a researcher at the University of Southampton after graduating from his PhD at Warwick. He has twice been an organising committee member for the Autonomous Agents and Multi-Agent Systems conference series, and was a member of the W3C working group which published standards on interoperable provenance data in 2013.
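
For illustration, the sketch below implements the recency-weighting baseline mentioned above, using an exponential decay over rating age; the half-life and the example ratings are made up, and this is not the JASPR project's code.

```python
# Recency-weighted reputation: older ratings count for less via exponential decay.
from math import exp, log

def recency_weighted_reputation(ratings, half_life_days=90.0):
    """ratings: list of (age_in_days, score) pairs, scores in [0, 1]."""
    if not ratings:
        return None
    decay = log(2) / half_life_days                      # decay with the given half-life
    weights = [exp(-decay * age) for age, _ in ratings]
    return sum(w * s for w, (_, s) in zip(weights, ratings)) / sum(weights)

# Recent ratings count for more, so a recent poor experience pulls the score down.
ratings = [(5, 0.2), (40, 0.9), (200, 1.0), (400, 1.0)]
print(f"recency-weighted reputation: {recency_weighted_reputation(ratings):.2f}")
```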

Relevance: 10.00%

Abstract:

Ordnance Survey, our national mapping organisation, collects vast amounts of high-resolution aerial imagery covering the entirety of the country. Currently, photogrammetrists and surveyors use this imagery to manually capture real-world objects and characteristics for a relatively small number of features. Arguably, the vast archive of imagery we have obtained portraying the whole of Great Britain is highly underutilised and could be 'mined' for much more information. Over the last year the ImageLearn project has investigated the potential of "representation learning" to automatically extract relevant features from aerial imagery. Representation learning is a form of data mining in which the feature extractors are learned using machine-learning techniques rather than being manually defined. At the beginning of the project we conjectured that the learned representations could help with processes such as object detection and identification, change detection, and social-landscape regionalisation of Britain. This seminar will give an overview of the project and highlight some of our research results.
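
As a hedged illustration of what "representation learning" means in this setting, and not the ImageLearn project's actual pipeline, the sketch below learns a small set of patch-level features with PCA and re-describes each patch in terms of them; the random patches stand in for real aerial imagery.

```python
# Learn a small dictionary of patch features with PCA, then represent each
# patch by its projection onto them. Random data stands in for real imagery.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
patches = rng.random((1000, 8 * 8))        # stand-in: 1000 flattened 8x8 image patches

pca = PCA(n_components=16)                 # learned feature extractor
features = pca.fit_transform(patches)      # 16-d learned representation per patch

print(features.shape)                           # (1000, 16)
print(pca.explained_variance_ratio_.sum())      # how much structure the features retain
```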

Relevance: 10.00%

Abstract:

There has been a great deal of interest in the area of cyber security in recent years. But what is cyber security exactly? And should society really care about it? We look at some of the challenges of being an academic working in the area of cyber security and explain why cyber security is, to put it rather simply, hard!

Speaker biography: Prof. Keith Martin is Professor of Information Security at Royal Holloway, University of London. He received his BSc (Hons) in Mathematics from the University of Glasgow in 1988 and a PhD from Royal Holloway in 1991. Between 1992 and 1996 he held a Research Fellowship at the University of Adelaide, investigating mathematical modelling of cryptographic key distribution problems. In 1996 he joined the COSIC research group of the Katholieke Universiteit Leuven in Belgium, working on security for third-generation mobile communications. Keith rejoined Royal Holloway in January 2000, became a Professor in Information Security in 2007, and was Director of the Information Security Group between 2010 and 2015. Keith's research interests range across cyber security, but with a focus on cryptographic applications. He is the author of 'Everyday Cryptography', published by Oxford University Press.

Relevance: 10.00%

Abstract:

The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV, and more. Then there are horizontal issues like privacy, security, accessibility, and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data, and how that translates into recent, current, and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway?

Speaker biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copywriting, and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
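
One concrete way to see the "data platform versus glorified USB stick" distinction is HTTP content negotiation: a URI can offer a machine-readable representation of a dataset rather than an opaque file download. The sketch below assumes a purely hypothetical dataset URI.

```python
# Ask a URI for a machine-readable representation via HTTP content negotiation.
# The URI below is a placeholder, not a real endpoint.
import requests

uri = "https://example.org/dataset/air-quality"   # hypothetical dataset URI
response = requests.get(uri, headers={"Accept": "application/ld+json"})

if response.ok and "json" in response.headers.get("Content-Type", ""):
    data = response.json()      # structured, self-describing data, not an opaque file
    print(sorted(data.keys()))
else:
    print(f"No machine-readable representation offered: {response.status_code}")
```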

Relevance: 10.00%

Abstract:

Massive Open Online Courses (MOOCs) generate enormous amounts of data. The University of Southampton has run, and is running, dozens of MOOC instances. The vast amount of data resulting from our MOOCs can provide highly valuable information to all parties involved in the creation and delivery of these courses. However, analysing and visualising such data is a task that not all educators have the time or skills to undertake. The recently developed MOOC Dashboard is a tool aimed at bridging this gap: it provides reports and visualisations based on the data generated by learners in MOOCs.

Speakers: Manuel Leon is currently a Lecturer in Online Teaching and Learning in the Institute for Learning Innovation and Development (ILIaD). Adriana Wilde is a Teaching Fellow in Electronics and Computer Science, with research interests in MOOCs and learning analytics. Darron Tang (4th-year BEng Computer Science) and Jasmine Cheng (BSc Mathematics & Actuarial Science, shortly starting an MSc in Data Science) have been working as interns over the summer of 2016 and have been developing the MOOC Dashboard.
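
As a rough illustration of the kind of report a MOOC dashboard might generate, and not the MOOC Dashboard's actual implementation, the sketch below counts unique active learners per week from an activity log; the file name and columns are hypothetical stand-ins for a real course export.

```python
# Weekly counts of active learners from a hypothetical activity log with
# columns: learner_id, step, timestamp.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("step_activity.csv", parse_dates=["timestamp"])

active = (log.set_index("timestamp")
             .resample("W")["learner_id"]
             .nunique())

active.plot(kind="bar", title="Active learners per week")
plt.ylabel("Unique learners")
plt.tight_layout()
plt.savefig("active_learners.png")
```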

Relevance: 10.00%

Abstract:

Provenance is a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing in the world. Some ten years after beginning research on the topic of provenance, I co-chaired the provenance working group at the World Wide Web Consortium, which published the PROV standard for provenance in 2013. In this talk, I will present some use cases for provenance, the PROV standard, and some flagship examples of adoption. I will then move on to our current research, which aims to exploit provenance in the context of the Sociam, SmartSociety, and ORCHID projects. In doing so, I will present techniques to deal with large-scale provenance, to build predictive models based on provenance, and to analyse provenance.
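
To make the PROV vocabulary concrete, here is a deliberately minimal, hand-rolled illustration of its core concepts (entities, activities, agents, and the relations between them); a real system would use one of the W3C PROV serialisations rather than plain Python dictionaries, and the names below are invented.

```python
# A toy provenance record using PROV-style relation names.
provenance = {
    "entities":   ["chart_v1", "dataset_2013"],
    "activities": ["plotting"],
    "agents":     ["analyst_alice"],
    "relations": [
        ("chart_v1", "wasGeneratedBy", "plotting"),
        ("plotting", "used", "dataset_2013"),
        ("chart_v1", "wasAttributedTo", "analyst_alice"),
    ],
}

def derived_from(entity, prov):
    """Follow wasGeneratedBy -> used to find what an entity was derived from."""
    generating = [a for e, r, a in prov["relations"] if e == entity and r == "wasGeneratedBy"]
    return [u for a, r, u in prov["relations"] if a in generating and r == "used"]

print(derived_from("chart_v1", provenance))   # ['dataset_2013']
```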

Relevance: 10.00%

Abstract:

In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. It cost hundreds of thousands of dollars at the time simply to purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. Control of this data lay either with a government's census and statistical bureau or with a handful of forward-thinking corporations that created it. Twenty years on, we find ourselves inundated with data (location and other) that we are challenged to amalgamate, much of it still "dirty" in nature. Open data initiatives such as the ODI give us great hope for how we might share information together and capitalize not only on crowdsourcing behavior but on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen-science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder the effect that social machines (in effect, autonomous data collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which include the potential for misinformation, misinterpretation, and information manipulation when context is entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to ponder the risks associated with following a purely technocratic approach in which machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance?

Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean (International) in the Faculty of Engineering and Information Sciences. Katina is editor-in-chief of IEEE Technology and Society Magazine and senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies, with an emphasis on an all-hazards approach to national security. She has written and edited six books, and has guest edited numerous special issues of journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation, and surveillance/uberveillance for Proceedings of the IEEE, Computer and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.

Relevance: 10.00%

Abstract:

Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and LSST share the five-dimensional nature of their data, with position, energy, and time being the fundamental axes. This talk will present an overview of the experiments and the data they gather, and outline the challenges in extracting information. The strategies commonly employed are very similar to those of industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry.

Speaker biography: Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto, and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification, and analysis.
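
As a much-simplified illustration of the data-filtering step mentioned above, and not the surveys' actual pipelines, the sketch below flags nights on which a source's flux deviates from its baseline by more than five robust standard deviations, treating them as candidate transient events; the light curve is synthetic.

```python
# Flag candidate transients in a synthetic nightly light curve using a
# robust (MAD-based) estimate of the normal scatter.
import numpy as np

rng = np.random.default_rng(1)
flux = rng.normal(loc=100.0, scale=2.0, size=365)   # a year of nightly flux measurements
flux[200:220] += 30.0                               # injected "supernova-like" brightening

baseline = np.median(flux)
scatter = 1.4826 * np.median(np.abs(flux - baseline))   # robust sigma from the MAD
candidates = np.flatnonzero(flux - baseline > 5 * scatter)

print(f"nights flagged as transient candidates: {candidates.min()}..{candidates.max()}")
```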

Relevance: 10.00%

Abstract:

Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive vices (i.e., shortcomings, limitations, constraints, and biases) are seen to play a positive functional role in yielding collective forms of cognitive success. In this talk, I will introduce the concept of mandevillian intelligence and review a number of strands of empirical research that help to shed light on the phenomenon. I will also attempt to highlight the value of the concept of mandevillian intelligence from a philosophical, scientific, and engineering perspective. Inasmuch as we accept the notion of mandevillian intelligence, it seems that the cognitive and epistemic value of a specific social or technological intervention will vary according to whether our attention is focused at the individual or the collective level of analysis. This has a number of important implications for how we think about the cognitive impacts of a number of Web-based technologies (e.g., personalized search mechanisms). It also forces us to take seriously the idea that the exploitation (or even the accentuation!) of individual cognitive shortcomings could, in some situations, provide a productive route to collective forms of cognitive and epistemic success.

Speaker biography: Paul Smart is a senior research fellow in the Web and Internet Science research group at the University of Southampton in the UK. He is a Fellow of the British Computer Society, a professional member of the Association for Computing Machinery, and a member of the Cognitive Science Society. Paul's research interests span a number of disciplines, including philosophy, cognitive science, social science, and computer science. His primary area of research interest relates to the social and cognitive implications of Web and Internet technologies. Paul received his bachelor's degree in Psychology from the University of Nottingham. He also holds a PhD in Experimental Psychology from the University of Sussex.

Relevance: 10.00%

Abstract:

This seminar consists of two very different research reports by PhD students in WAIS. Hypertext Engineering, Fettling or Tinkering (Mark Anderson): Contributors to a public hypertext such as Wikipedia do not necessarily record their maintenance activities, but some specific hypertext features - such as transclusion - could indicate deliberate editing with a mind to the hypertext's long-term use. The MediaWiki software used to create Wikipedia supports transclusion, a deliberately hypertextual form of content creation which aids long-term consistency. This talk discusses the evidence for the use of hypertext transclusion in Wikipedia, and its implications for the coherence and stability of Wikipedia. Designing a Public Intervention - Towards a Sociotechnical Approach to Web Governance (Faranak Hardcastle): In this talk I introduce a critical and speculative design for a socio-technical intervention, called TATE (Transparency and Accountability Tracking Extension), that aims to enhance transparency and accountability in online behavioural tracking and advertising mechanisms and practices.
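
For readers unfamiliar with transclusion, the sketch below shows one crude way to look for it in raw wikitext: MediaWiki transcludes templates and pages with {{...}} markup. This is only an illustration; a real analysis would need to handle nesting, parser functions, and namespaces properly.

```python
# Find transclusion targets ({{...}} markup) in a snippet of wikitext.
import re

wikitext = """
{{Infobox settlement|name=Example Town}}
Example Town is a town. {{convert|12|km|mi}} from the coast.
See also {{:Main Page}} for a transcluded article page.
"""

transclusions = re.findall(r"\{\{\s*([^|{}]+?)\s*(?:\|[^{}]*)?\}\}", wikitext)
print(transclusions)   # ['Infobox settlement', 'convert', ':Main Page']
```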

Relevance: 10.00%

Abstract:

After developing many sensor networks using custom protocols to save energy and minimise code complexity, we have now experimented with standards-based designs. These use IPv6 (6LoWPAN), RPL routing, CoAP for interfaces and data access, and Protocol Buffers for data encapsulation. Deployments in the Cairngorm mountains have shown the capabilities and limitations of the implementations. This seminar will outline the hardware and software we used and discuss the advantages of the more standards-based approach. At the same time we have been progressing with high-quality imaging of cultural heritage using the RTI domes, so some results and designs will be shown as well. This seminar will therefore cover everything from peat bogs to museums, and from binary HTTP-like REST to 3,500-year-old documents written on clay.
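
To give a flavour of the standards-based approach, the sketch below issues a CoAP GET request to a sensor resource, assuming the Python aiocoap library; the IPv6 address and resource path are placeholders, not the endpoints of the deployment described.

```python
# A minimal CoAP client sketch, assuming the aiocoap library. The address and
# resource path are placeholders for a real 6LoWPAN sensor node.
import asyncio
from aiocoap import Context, Message, GET

async def read_temperature():
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temperature")
    response = await protocol.request(request).response   # CoAP request/response exchange
    print(response.code, response.payload.decode())

asyncio.run(read_temperature())
```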