941 results for Content Analytics
Abstract:
We present a Connected Learning Analytics (CLA) toolkit, which enables data to be extracted from social media and imported into a Learning Record Store (LRS), as defined by the new xAPI standard. Core to the toolkit is the notion of learner access to their own data. A number of implementation issues are discussed, and an ontology of xAPI verb/object/activity statements, as they might be unified across seven different social media and online environments, is introduced. After considering some of the analytics that learners might be interested in discovering about their own processes (the delivery of which is prioritised for the toolkit), we propose a set of learning activities that could be easily implemented, with their data tracked by anyone using the toolkit and an LRS.
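As a concrete illustration of the data model involved, the sketch below constructs a single xAPI-style statement in Python. The verb and activity IRIs are placeholder examples rather than the unified ontology the paper introduces, and the learner identity is invented.

    import json

    # A minimal sketch of an xAPI statement as a toolkit like this might
    # record it. The verb and activity IRIs below are illustrative
    # placeholders, not the paper's actual ontology.
    statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Example Learner",
            "mbox": "mailto:learner@example.org",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/shared",
            "display": {"en-US": "shared"},
        },
        "object": {
            "objectType": "Activity",
            "id": "http://example.org/social-media/post/123",
            "definition": {
                "name": {"en-US": "A social media post"},
                "type": "http://activitystrea.ms/schema/1.0/note",
            },
        },
    }

    # An LRS accepts statements via POST to its /statements resource;
    # here we simply serialise the statement for inspection.
    print(json.dumps(statement, indent=2))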
Abstract:
Reflective writing is an important learning task that helps foster reflective practice, but even when assessed it is rarely analysed or critically reviewed due to its subjective and affective nature. We propose a process for capturing subjective and affective analytics based on the identification and recontextualisation of anomalous features within reflective text. We evaluate two human-supervised trials of the process, demonstrating the potential for an automated Anomaly Recontextualisation process for Learning Analytics.
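The abstract does not spell out the feature-identification step, but one simple way to flag anomalous lexical features in a reflective text is to score its terms against a background corpus. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' process; the sample texts are invented.

    from collections import Counter
    import math

    def anomalous_terms(text, background, top_n=5):
        """Flag terms whose relative frequency in `text` far exceeds that
        in a background corpus -- one simple notion of a lexical anomaly."""
        doc = Counter(text.lower().split())
        ref = Counter(background.lower().split())
        doc_total = sum(doc.values())
        ref_total = sum(ref.values())
        scores = {}
        for term, count in doc.items():
            p_doc = count / doc_total
            # Laplace-smoothed background probability so unseen terms
            # score highly rather than dividing by zero.
            p_ref = (ref.get(term, 0) + 1) / (ref_total + len(ref))
            scores[term] = p_doc * math.log(p_doc / p_ref)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    reflection = "I felt anxious presenting but the feedback surprised me"
    corpus = "students write reports about lectures assignments and group work"
    print(anomalous_terms(reflection, corpus))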
Abstract:
This thesis examines the confluence of digital technology, evolving classroom pedagogy and young people's screen use, demonstrating how screen content can be deployed, curated, and developed for effective use in contemporary classrooms. Based on four detailed case studies drawn from the candidate's professional creative practice, the research presents a set of design considerations for educational media that distill the relevance of the research for screen producers seeking to develop a more productive understanding of and engagement with the school education sector.
Abstract:
In this chapter, we explore methods for automatically generating game content—and games themselves—adapted to individual players in order to improve their playing experience or achieve a desired effect. This goes beyond notions of mere replayability and involves modeling player needs to maximize their enjoyment, involvement, and interest in the game being played. We identify three main aspects of this process: generation of new content and rule sets, measurement of this content and the player, and adaptation of the game to change player experience. This process forms a feedback loop of constant refinement, as games are continually improved while being played. Framed within this methodology, we present an overview of our recent and ongoing research in this area. This is illustrated by a number of case studies that demonstrate these ideas in action over a variety of game types, including 3D action games, arcade games, platformers, board games, puzzles, and open-world games. We draw together some of the lessons learned from these projects to comment on the difficulties, the benefits, and the potential for personalized gaming via adaptive game design.
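The feedback loop described above can be sketched schematically. In the toy Python example below, "content" is reduced to a single difficulty parameter, the player model is a crude flow-style enjoyment proxy, and adaptation is simple hill climbing; all three are illustrative assumptions rather than the chapter's actual systems.

    # A schematic sketch of the generate -> measure -> adapt loop.

    def generate_content(difficulty):
        # Stand-in for procedural content generation at a given difficulty.
        return {"difficulty": difficulty, "enemies": int(difficulty * 10)}

    def measure_experience(content, player_skill):
        # Toy player model: enjoyment peaks when difficulty matches skill
        # (a crude flow-channel assumption).
        return 1.0 - abs(content["difficulty"] - player_skill)

    def adapt(difficulty, step, enjoyment, prev_enjoyment):
        # Simple hill climbing: keep moving while enjoyment improves,
        # reverse direction when it drops.
        if enjoyment < prev_enjoyment:
            step = -step
        return difficulty + step, step

    player_skill = 0.7          # hidden from the generator
    difficulty, step = 0.3, 0.1
    prev_enjoyment = -1.0
    for session in range(6):
        content = generate_content(difficulty)
        enjoyment = measure_experience(content, player_skill)
        print(f"session {session}: difficulty={difficulty:.1f} "
              f"enjoyment={enjoyment:.2f}")
        difficulty, step = adapt(difficulty, step, enjoyment, prev_enjoyment)
        prev_enjoyment = enjoyment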
Abstract:
One of the main challenges in data analytics is that discovering structures and patterns in complex datasets is a computationally intensive task. Recent advances in high-performance computing provide part of the solution: multicore systems are now more affordable and more accessible. In this paper, we investigate how such systems can be used to develop more advanced methods for data analytics. We focus on two specific areas: model-driven analysis and data mining using optimisation techniques.
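As a minimal sketch of the multicore angle, the Python example below parallelises an optimisation-style mining task, a brute-force parameter search over a toy linear model, across the available cores. The data and model are invented for illustration and are not the paper's methods.

    from multiprocessing import Pool

    # Evaluate many candidate models in parallel and keep the best.
    DATA = [(x, 2.0 * x + 1.0) for x in range(100)]  # synthetic (x, y) pairs

    def loss(params):
        # Sum of squared errors of a candidate line a*x + b over the data.
        a, b = params
        return sum((a * x + b - y) ** 2 for x, y in DATA)

    def main():
        # Grid of candidate (slope, intercept) parameters.
        candidates = [(a / 10, b / 10) for a in range(40) for b in range(40)]
        with Pool() as pool:  # one worker per available core by default
            losses = pool.map(loss, candidates)
        best_loss, best_params = min(zip(losses, candidates))
        print(f"best params={best_params} loss={best_loss:.3f}")

    if __name__ == "__main__":
        main()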
Abstract:
The International Journal of the First Year in Higher Education (Int J FYHE) began in 2010 with a specific FYHE focus and has published two issues per year with one issue linked to The International First Year in Higher Education Conference (FYHE Conference). This issue—Volume 6, Issue 1—is the last under this title. In 2015 the Journal will align to a new conference that has a broader focus on Students, Transitions, Achievement, Retention and Success (STARS). At this significant point and before we move on to the new journal, the journal team felt it was appropriate that the Feature in this final issue of the Int J FYHE should summarise the Journal’s activity over the years from 2010 to 2014.
Abstract:
The following is an edited version of a submission to the Environment and Communications Legislation Committee with reference to the Australian Broadcasting Corporation Amendment (Local Content) Bill 2014, by Brian McNair and Ben Goldsmith. The committee has now reported.
Abstract:
Background: As the increasing adoption of information technology continues to offer better distant medical services, the distribution of, and remote access to, digital medical images over public networks continues to grow significantly. Such use of medical images raises serious concerns for their continuous security protection, which digital watermarking has shown great potential to address.
Methods: We present a content-independent embedding scheme for medical image watermarking. We observe that the perceptual content of medical images varies widely with their modalities. Recent medical image watermarking schemes are image-content dependent and thus may suffer from inconsistent embedding capacity and visual artefacts. To attain the image-content-independent embedding property, we generalise the RONI (region of non-interest to medical professionals) selection process and use it for embedding by utilising the RONI's least significant bit-planes. The proposed scheme thus avoids the need for RONI segmentation, which incurs capacity and computational overheads.
Results: Our experimental results demonstrate that the proposed embedding scheme performs consistently over a dataset of 370 medical images spanning 7 different modalities. The results also show how state-of-the-art reversible schemes can perform inconsistently across different modalities of medical images. Our scheme attains an MSSIM (Mean Structural SIMilarity) above 0.999 with a deterministically adaptable embedding capacity.
Conclusions: Our proposed image-content-independent embedding scheme is consistent across modalities and maintains good image quality in the RONI while keeping all other pixels in the image untouched. Thus, with an appropriate watermarking framework (i.e., with consideration of watermark generation, embedding and detection functions), the scheme is viable for multi-modality medical image applications and distant medical services such as teleradiology and eHealth.
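For readers unfamiliar with bit-plane embedding, the sketch below shows generic least-significant-bit (LSB) embedding into a designated pixel region. It is a simplified illustration of the family of techniques involved, not the paper's content-independent RONI selection scheme; the image, region, and watermark bits are toy values.

    # Generic LSB embedding into a designated region, leaving all other
    # pixels untouched. Toy 4x4 grayscale "image" and invented bits.

    def embed_lsb(pixels, region, bits):
        """Write `bits` into the least significant bit of each pixel
        listed in `region` as (row, col) coordinates."""
        out = [row[:] for row in pixels]
        for (r, c), bit in zip(region, bits):
            out[r][c] = (out[r][c] & ~1) | bit
        return out

    def extract_lsb(pixels, region):
        # Read the least significant bit back out of each region pixel.
        return [pixels[r][c] & 1 for r, c in region]

    image = [[120, 121, 119, 118],
             [130, 131, 129, 128],
             [140, 141, 139, 138],
             [150, 151, 149, 148]]
    roni = [(0, 0), (0, 1), (1, 0), (1, 1)]  # a designated region
    watermark = [1, 0, 1, 1]

    marked = embed_lsb(image, roni, watermark)
    assert extract_lsb(marked, roni) == watermark
    print("embedded and recovered:", extract_lsb(marked, roni))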
Abstract:
Through ubiquitous computing and location-based social media, information is spreading outside the traditional domains of home and work into the urban environment. Digital technologies have changed the way people relate to the urban form, supporting discussion on multiple levels and allowing more citizens to be heard in new ways (Fredericks et al. 2013; Houghton et al. 2014; Caldwell et al. 2013). Face-to-face and digitally mediated discussions, facilitated by tangible and hybrid interaction such as multi-touch screens and media façades, are initiated through a telephone-booth-inspired portable structure: the InstaBooth. The InstaBooth prototype employs a multidisciplinary approach to engage local communities in a situated debate on the future of their urban environment. With it, we capture citizens' past stories and opinions on the use and design of public places. Public consultations as currently conducted often engage only a section of the population affected by a proposed development, and the most vocal citizens are not necessarily the most representative of their communities (Jenkins 2006). Alternative ways to engage urban dwellers in the debate about the built environment are currently being explored, including the use of social media and online tools (Foth 2009). This project fosters innovation by providing pathways for communities to participate in the decision-making processes that inform the urban form. The InstaBooth promotes dialogue and mediation between bottom-up and top-down approaches to urban design, with the aim of promoting community connectedness with the urban environment. It provides an engagement and discussion platform that leverages a number of locally developed display and interaction technologies to facilitate a dialogue of ideas and commentary, combining multiple interaction techniques into a hybrid (digital and analogue) media space. Through the InstaBooth, urban design and architectural proposals are displayed, encouraging commentary from visitors. Inside the InstaBooth, visitors can use a multi-touch screen to browse media, write a note, or draw a picture to provide feedback. The purpose of the InstaBooth is to engage a broader section of society, including those who are often marginalised. The specific design of the internal and external interfaces, the mutual relationship between these interfaces with regard to information display and interaction, and the question of how visitors can engage with the system are part of the research agenda of the project.
Abstract:
The only effective and scalable way to regulate the actions of people on the internet is through online intermediaries. These are the institutions that facilitate communication: internet service providers, search engines, content hosts, and social networks. Governments, private firms, and civil society organisations are increasingly seeking to influence these intermediaries to take more responsibility to prevent or respond to IP infringements. Around the world, intermediaries are increasingly subject to a variety of obligations to help enforce IP rights, ranging from informal social and governmental pressure, to industry codes and private negotiated agreements, to formal legislative schemes. This paper provides an overview of this emerging shift in regulatory approaches, away from legal liability and towards increased responsibilities for intermediaries. This shift straddles two different potential futures: an optimistic set of more effective, more efficient mechanisms for regulating user behaviour, and a dystopian vision of rule by algorithm and private power, without the legitimising influence of the rule of law.
Abstract:
We currently face overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, giving rise to the phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers encounter some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect all important content and analyse it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information, which requires common-sense knowledge, discourse analysis, and disambiguation techniques.
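As a minimal illustration of the data-extraction task mentioned above, the sketch below pulls the rows of an HTML table into structured records using only the Python standard library. The markup is an invented example; real extraction systems must handle far messier pages.

    from html.parser import HTMLParser

    # A toy illustration of data extraction from a web page: collecting
    # HTML table rows into lists of cell values.

    class TableExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.rows, self.row, self.in_cell = [], [], False

        def handle_starttag(self, tag, attrs):
            if tag == "tr":
                self.row = []
            elif tag in ("td", "th"):
                self.in_cell = True

        def handle_endtag(self, tag):
            if tag == "tr" and self.row:
                self.rows.append(self.row)
            elif tag in ("td", "th"):
                self.in_cell = False

        def handle_data(self, data):
            # Only keep text that appears inside a table cell.
            if self.in_cell and data.strip():
                self.row.append(data.strip())

    page = """<table>
    <tr><th>Source</th><th>Items</th></tr>
    <tr><td>News site A</td><td>120</td></tr>
    <tr><td>Portal B</td><td>87</td></tr>
    </table>"""

    parser = TableExtractor()
    parser.feed(page)
    print(parser.rows)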