235 results for internet filtering
Abstract:
The Guardian's reportage of the 2009 United Kingdom Member of Parliament (MP) expenses scandal used crowdsourcing and computational journalism techniques. Computational journalism can be broadly defined as the application of computer science techniques to the activities of journalism. Its foundation lies in computer-assisted reporting techniques, and its importance is increasing due to: (a) the increasing availability of large-scale government datasets for scrutiny; (b) the declining cost, increasing power and ease of use of data mining and filtering software, and of Web 2.0; and (c) the explosion of online public engagement and opinion. This paper provides a case study of the Guardian's MP expenses scandal reportage and reveals some key challenges and opportunities for digital journalism. It finds that journalists may increasingly take an active role in understanding, interpreting, verifying and reporting clues or conclusions that arise from the interrogation of datasets (computational journalism). Secondly, a distinction should be made between information reportage and computational journalism in the digital realm, just as a distinction might be made between citizen reporting and citizen journalism. Thirdly, an opportunity exists for online news providers to take a 'curatorial' role, selecting and making easily available the best data sources for readers to use (information reportage). These activities have always been fundamental to journalism; however, the way in which they are undertaken may change. Findings from this paper suggest opportunities and challenges for the practical implementation of computational journalism techniques by Australian digital media providers, and point to further areas of research.
Abstract:
This paper provides an overview of two developments in online media that coincided with the 'year-long campaign' that was the 2007 Australian Federal election. It discusses the relatively successful use of the Internet and social media in the Australian Labor Party's 'Kevin07' campaign, and contrasts this with the Liberal-National Party's faltering use of YouTube for policy announcements. It also notes the struggle for authority in interpreting polling data between the mainstream media and various online commentators, and the 'July 12 incident' at The Australian, in which the newspaper strongly denounced alleged biases and prejudices among bloggers and on political Web sites. It concludes by considering some wider implications for political communication and the politics-media relationship, and whether we are seeing trends towards dispersal and diversification characterising the 'third age' of political communication.
Abstract:
Internet and Web services are used in both teaching and learning and are gaining popularity in today's world. E-learning is becoming popular and is considered the latest advance in technology-based learning. Despite its potential advantages for learning in a small country like Bhutan, there is a lack of e-services at the Paro College of Education. This study investigated students' attitudes towards online communities and their frequency of access to the Internet, and how students locate and use different sources of information in their project tasks. Since improvement was at the heart of this research, an action research approach was used. Based on the idea of purposeful sampling, semi-structured interviews and observations were used as data collection instruments. Ten randomly selected students (five girls and five boys) participated in this research as the control group. The findings indicated a lack of educational information technology services, such as e-learning, at the college. A very slow Internet connection was the main barrier to e-learning and to accessing Internet resources. There is a strong relationship between the quality of a written task and the source of its information, and between Web searching and learning. The sources of information used in assignments and project work are limited to library books, which are often outdated and of poor quality. Project tasks submitted by most of the students were of poor quality.
Abstract:
This paper identifies factors underpinning the emergence of citizen journalism, including the rise of Web 2.0, the rethinking of journalism as a professional ideology, the decline of 'high modernist' journalism, divergence between elite and popular opinion, changing revenue bases for news production, and the decline of deference in democratic societies. It connects these issues to wider debates about the implications of journalism and news production increasingly moving into the Internet environment.
Abstract:
The Internet presents a constantly evolving frontier for criminology and policing, especially in relation to online predators: paedophiles operating within the Internet for safer access to children, child pornography and networking opportunities with other online predators. The goals of this qualitative study were to undertake behavioural research, identifying personality types and archetypes of online predators and comparing and contrasting them with behavioural profiles and other psychological research on offline paedophiles and sex offenders; to gather intelligence on the technological utilisation of online predators; and to conduct observational research on the social structures of online predator communities. These goals were achieved through the covert monitoring and logging of public activity within four Internet Relay Chat (IRC) chatrooms themed around child sexual abuse and located on the Undernet network. Five days of monitoring were conducted on these four chatrooms, from Wednesday 1 April to Sunday 5 April 2009, and the raw data were collated and analysed. The analysis identified four personality types (the gentleman predator, the sadist, the businessman and the pretender) and eight archetypes (groomers, dealers, negotiators, roleplayers, networkers, chat requestors, posters and travellers). The characteristics and traits of these personality types and archetypes, extracted from the literature on offline paedophiles and sex offenders, are detailed and contrasted against the online sexual predators identified within the chatrooms, revealing many similarities and interesting differences, particularly for the businessman and pretender personality types. These personality types and archetypes were illustrated by selecting users who displayed the appropriate characteristics and tracking them through the four chatrooms, revealing intelligence on the use of proxy servers (especially via the Tor software) and other security strategies such as Undernet's host-masking service. Name and age changes, used as a potential sexual grooming tactic, were also revealed through the use of Analyst's Notebook software, while ISP information suggested that many online predators were not using any safety mechanism and were relying on the anonymity of the Internet. The activities of these online predators were analysed, especially with regard to child sexual grooming and the 'posting' of child pornography, revealing some of the methods by which online predators use new Internet technologies to sexually groom and abuse children (instant messengers, webcams and microphones) and to store and disseminate illegal materials on image-sharing websites and peer-to-peer software such as Gigatribe. The social structures of the chatrooms were also analysed, and the community functions and characteristics of each chatroom explored. The findings of this research indicate several opportunities for further research, and recommendations are given on policy, prevention and response strategies with regard to online predators.
Abstract:
Purpose: The purpose of this review was to present an in-depth analysis of literature identifying the extent of dropout from Internet-based treatment programmes for psychological disorders, and of literature exploring the variables associated with dropout from such programmes.
Methods: A comprehensive literature search was conducted on PsycINFO and PubMed with the keywords dropouts, drop out, dropout, dropping out, attrition, premature termination, termination, non-compliance, treatment, intervention, and program, each in combination with the keywords Internet and web. A total of 19 studies published between 1990 and April 2009 and focusing on dropout from Internet-based treatment programmes involving minimal therapist contact were identified and included in the review.
Results: Dropout ranged from 2% to 83%, and a weighted average of 31% of participants dropped out of treatment. A range of variables has been examined for association with dropout from Internet-based treatment programmes for psychological disorders. Despite the numerous variables explored, evidence on any specific variable that makes an individual more likely to drop out of Internet-based treatment is currently limited.
Conclusions: This review highlights the need for more rigorous and theoretically guided research exploring the variables associated with dropping out of Internet-based treatment for psychological disorders.
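As a purely illustrative aside, a pooled figure like the 31% above is a sample-size-weighted average of per-study dropout rates. The sketch below shows that computation; the study counts are invented placeholders, not data from the 19 reviewed studies.

```python
# Hypothetical per-study data as (participants, dropouts) pairs. These
# numbers are invented placeholders, not the studies from the review.
studies = [(120, 36), (45, 2), (60, 50), (200, 62)]

total_participants = sum(n for n, _ in studies)
total_dropouts = sum(d for _, d in studies)

# Each study contributes to the pooled rate in proportion to its size,
# unlike a simple (unweighted) mean of the per-study percentages.
weighted_average = total_dropouts / total_participants
print(f"Weighted average dropout: {weighted_average:.0%}")
```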
Abstract:
This paper presents a framework for performing real-time recursive estimation of landmarks' visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark's visual appearance. The overall filtering provides information complementary to conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification: the set of labels involved in the classification task is treated as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle and an air vehicle.
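To make the recursive update concrete, here is a minimal sketch of the Bayesian fusion step over a discretised appearance space. The four appearance classes and the likelihood vectors are invented for illustration; the paper's actual likelihood mapping from high-dimensional image data is application-specific and not reproduced here.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Fuse a prior belief with an observation likelihood and renormalise."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Uniform prior over an assumed discretisation into 4 appearance classes.
belief = np.full(4, 0.25)

# Each observation (robotic or human) arrives already mapped into the
# compressed space as a likelihood vector over the appearance classes.
observations = [
    np.array([0.7, 0.1, 0.1, 0.1]),   # e.g. a robot image observation
    np.array([0.6, 0.2, 0.1, 0.1]),   # e.g. a human selecting a label
]
for likelihood in observations:
    belief = bayes_update(belief, likelihood)

# The posterior is a short vector, hence cheap to transmit over a
# low-bandwidth sensor network.
print(belief)
```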
Abstract:
This article presents a case study that shows how a creative music educator uses the Internet to enable participatory performance.
Abstract:
This paper presents a modified approach to evaluating access control policy similarity and dissimilarity, based on the proposal of Lin et al. (2007). Lin et al.'s policy similarity approach is intended as a filter stage that identifies similar XACML policies, which can then be analysed further using more computationally demanding techniques based on model checking or logical reasoning. This paper improves Lin et al.'s similarity computation and also proposes a mechanism for calculating a dissimilarity score by identifying related policies that are likely to produce different access decisions. Departing from the original algorithm, the modifications take into account the policy obligations, the rule- or policy-combining algorithm, and the operators between attribute names and values. The algorithms are useful in activities involving parties from multiple security domains, such as secured collaboration or secured task distribution. They allow various comparison options for evaluating policies while retaining control over the restriction level via a number of thresholds and weight factors.
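As a rough illustration of a threshold-and-weight-based filter stage of this kind, the schematic sketch below scores the overlap of two rules element by element. The element names, weights and threshold are assumptions for demonstration, not Lin et al.'s actual formulation or the paper's modified algorithm.

```python
def attribute_overlap(a, b):
    """Jaccard overlap of two attribute-value sets (1.0 if both empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def rule_similarity(rule1, rule2, weights):
    """Weighted sum of per-element overlaps, one weight per policy element."""
    return sum(w * attribute_overlap(rule1[element], rule2[element])
               for element, w in weights.items())

# Illustrative weights over assumed policy elements, including obligations.
weights = {"subjects": 0.3, "resources": 0.3, "actions": 0.2, "obligations": 0.2}

r1 = {"subjects": {"doctor"}, "resources": {"record"},
      "actions": {"read", "write"}, "obligations": {"log"}}
r2 = {"subjects": {"doctor", "nurse"}, "resources": {"record"},
      "actions": {"read"}, "obligations": set()}

sim = rule_similarity(r1, r2, weights)
# Rules that overlap heavily but carry opposite effects (Permit vs. Deny)
# are the candidates likely to yield different access decisions, i.e. the
# targets of a dissimilarity score. A threshold decides what goes on to the
# expensive model-checking stage.
print(f"similarity = {sim:.2f}, analyse further: {sim >= 0.5}")
```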
Abstract:
The enforcement of intellectual property rights poses one of the greatest current threats to the privacy of individuals online. Recent trends have shown that the balance between privacy and intellectual property enforcement has shifted in favour of intellectual property owners. This article discusses the ways in which the scope of preliminary discovery and Anton Piller orders has been overly expanded in actions where large amounts of electronic information are available, especially against online intermediaries (service providers and content hosts). The victim in these cases is usually the end user, whose privacy is infringed without a right of reply and sometimes without notice. This article proposes some ways in which the delicate balance can be restored, and considers some safeguards for user privacy. These safeguards include restructuring the threshold tests for discovery, limiting the scope of information disclosed, distinguishing identity discovery from information discovery, and distinguishing information preservation from preliminary discovery.
Abstract:
It is a significant challenge to clearly identify the boundary between positive and negative streams in information filtering systems. Several attempts have used negative feedback to address this challenge; however, two issues arise in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach that selects 'offenders' from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms, so that multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance that is also consistent for adaptive filtering.
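A minimal sketch of the three-way term split the abstract describes is shown below, assuming document-frequency evidence alone. The decision rule and the example documents are illustrative assumptions, not the paper's mining algorithm or its revising strategies.

```python
def classify_terms(pos_docs, neg_docs):
    """Split terms into positive specific, general, and negative specific,
    based on whether they appear only in positive documents, only in the
    selected negative samples (offenders), or in both."""
    categories = {}
    vocab = {term for doc in pos_docs + neg_docs for term in doc}
    for term in vocab:
        in_pos = any(term in doc for doc in pos_docs)
        in_neg = any(term in doc for doc in neg_docs)
        if in_pos and not in_neg:
            categories[term] = "positive specific"
        elif in_neg and not in_pos:
            categories[term] = "negative specific"
        else:
            categories[term] = "general"
    return categories

# Toy documents as term sets; the negative one stands in for an 'offender'
# selected from the negative stream.
pos = [{"wind", "farm", "energy"}, {"wind", "turbine", "energy"}]
neg = [{"wind", "instrument", "music"}]
print(classify_terms(pos, neg))
# 'wind' comes out general: it is the kind of noisy feature whose weight a
# revising strategy would adjust using the offender documents.
```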
Abstract:
This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter the sheer volume of incoming information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments were conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results on the RCV1 corpus show that the T-SM model significantly outperforms the other types of two-stage IF models.
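The pipeline shape can be sketched in a few lines: a cheap term-overlap first stage discards clearly irrelevant documents (the overload problem), and a costlier pattern-matching second stage ranks the survivors against the user's profile (the mismatch problem). The profile terms, patterns and threshold below are illustrative assumptions, not the T-SM model's actual rough analysis or pattern taxonomy mining.

```python
# Assumed user profile: individual terms for the rough first stage and
# term-set patterns for the second stage.
PROFILE_TERMS = {"filtering", "information", "pattern"}
PROFILE_PATTERNS = [{"information", "filtering"}, {"pattern", "mining"}]

def stage1_term_filter(doc_terms, threshold=1):
    """Rough, low-cost filter: keep documents with minimal term overlap."""
    return len(doc_terms & PROFILE_TERMS) >= threshold

def stage2_pattern_score(doc_terms):
    """Costlier stage: score a document by how many profile patterns it covers."""
    return sum(pattern <= doc_terms for pattern in PROFILE_PATTERNS)

# Toy incoming stream, each document reduced to its term set.
stream = [
    {"football", "score"},                            # dropped by stage 1
    {"information", "filtering", "user"},             # covers one pattern
    {"pattern", "mining", "information", "filtering"},# covers both patterns
]
survivors = [doc for doc in stream if stage1_term_filter(doc)]
ranked = sorted(survivors, key=stage2_pattern_score, reverse=True)
print(ranked)
```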