999 results for Internet filtering


Relevance:

100.00%

Publisher:

Abstract:

The following report considers a number of key challenges the Australian Federal Government faces in designing the regulatory framework and the reach of its planned mandatory internet filter. Previous reports on the mandatory filtering scheme have concentrated on the filtering technologies, their efficacy, their cost and their likely impact on the broadband environment. This report focuses on the scope and the nature of content that is likely to be caught by the proposed filter and on identifying associated public policy implications.

Relevance:

70.00%

Publisher:

Abstract:

Governments have traditionally censored drug-related information, both in traditional media and, in recent years, in online media. We explore Internet content regulation from a drug-policy perspective by describing the likely impacts of censoring drug websites and the parallel growth in hidden Internet services. Australia proposes a compulsory Internet filtering regime that would block websites that ‘depict, express or otherwise deal with matters of… drug misuse or addiction’ and/or ‘promote, incite or instruct in matters of crime’. In this article, we present findings from a mixed-methods study of online drug discussion. Our research found that websites dealing with drugs, which would likely be blocked by the filter, in fact contributed positively to harm reduction. Such sites helped people access more comprehensive and relevant information than was available elsewhere. Blocking these websites would likely drive drug discussion underground at a time when corporate-controlled ‘walled gardens’ (e.g. Facebook) and proprietary operating systems on mobile devices may also limit open drug discussion. At the same time, hidden Internet services, such as Silk Road, have emerged that are not affected by Internet filtering. The inability of any government to regulate Tor websites and the crypto-currency Bitcoin poses a unique challenge to drug prohibition policies.
Read More: http://informahealthcare.com/doi/full/10.3109/09687637.2012.745828

Relevance:

70.00%

Publisher:

Abstract:

Purpose – The purpose of this study is to explore the Australian public's and stakeholders' views towards the regulation of the Internet and its content. The federal government called for submissions addressing its proposal, and this paper analyses these submissions for themes and provides clarity as to the Australian public's and stakeholders' key concerns regarding the proposed policy. Design/methodology/approach – The paper uses a qualitative approach to analyse the public consultation submissions to the Australian Federal Government. These documents are coded and analysed to determine negative and positive viewpoints. Findings – The analysis of the consultation shows that there was no public support for any of the measures put forward, and that the Australian Federal Government in its response has not recognised this public feedback, instead utilising only some of the qualitative feedback obtained through the public consultation process to try to justify its case to proceed with its proposals. Research limitations/implications – The study is focussed on Australia. Practical implications – The paper analyses a proposed national approach to filtering the content of the Internet and discusses the public reaction to such an approach. Social implications – The paper looks at how different parts of Australian society view Internet filtering in a positive or negative manner. Originality/value – This is the only study that directly looks at the viewpoint of the Australian public.

Relevance:

60.00%

Publisher:

Abstract:

This entry discusses the origins and history of media content regulation, the reasons for content regulations, and their application to different media platforms. It discusses online content regulations and the concerns that have motivated such policies with particular reference to debates about internet filtering. It is noted that, as there is growing convergence of media content, platforms, devices, and services, the debates can be expected to shift from free speech and censorship on the internet and the social protection of internet users, to wider issues of media policy reform that include cultural policy and industry development in the digital economy.

Relevance:

60.00%

Publisher:

Abstract:

Resource, Poster and Reference for the coursework

Relevance:

60.00%

Publisher:

Abstract:

Is it possible to say something positive about Internet filtering in libraries and not have everyone, including your mother, call you a wild-eyed, hidebound, neo-Nazi bashi-bazouk? No, of course not, but I'm going to try anyway.

Relevance:

60.00%

Publisher:

Abstract:

Communication technologies shape how political activist networks are produced and maintain themselves. In Cuba, despite ideologically and physically oppressive practices by the state, a severe lack of Internet access, and extensive government surveillance, a small network of bloggers and cyberactivists has achieved international visibility and recognition for its critiques of the Cuban government. This qualitative study examines the blogger collective known as Voces Cubanas in Havana, Cuba in 2012, advancing a new approach to the study of transnational activism and the role of technology in the construction of political narrative. Voces Cubanas is analyzed as a network of connections between human and non-human actors that produces and sustains powerful political alliances. Voces Cubanas and its allies work collectively to co-produce contentious political discourses, confronting the dominant ideologies and knowledges produced by the Cuban state. Transnational alliances, the act of translation, and a host of unexpected and improvised technologies play central roles in the production of these narratives, indicating a new breed of cyborg sociopolitical action reliant upon fluid and flexible networks and the act of writing.

Relevance:

40.00%

Publisher:

Abstract:

This thesis proposes a middleware solution for scenarios in which sensors produce a large volume of data that must be managed and processed through preprocessing, filtering and buffering operations, in order to improve communication efficiency and bandwidth consumption while respecting energy and computational constraints. These components can be optimized through remote tuning operations.
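
Purely as an illustration of the kind of sensor-side pipeline the abstract describes, here is a minimal Python sketch that filters near-duplicate readings and buffers the rest into batches, with parameters that a remote tuning command could adjust. The class, method names and thresholds are assumptions made for illustration, not the thesis's implementation.

```python
# Hypothetical sketch: a sensor-side pipeline that filters and buffers readings
# before transmission; both parameters are meant to be remotely tunable.
from typing import List, Optional

class SensorPipeline:
    def __init__(self, change_threshold: float = 0.5, batch_size: int = 16):
        self.change_threshold = change_threshold   # minimum change worth transmitting
        self.batch_size = batch_size               # readings buffered per transmission
        self._last_sent: Optional[float] = None
        self._buffer: List[float] = []

    def tune(self, change_threshold: float, batch_size: int) -> None:
        """Apply a remote tuning command, trading accuracy for bandwidth/energy."""
        self.change_threshold = change_threshold
        self.batch_size = batch_size

    def push(self, reading: float) -> Optional[List[float]]:
        """Filter out near-duplicate readings, buffer the rest, and return a
        batch to transmit once the buffer is full (otherwise None)."""
        if self._last_sent is not None and abs(reading - self._last_sent) < self.change_threshold:
            return None                            # filtered: not worth sending
        self._last_sent = reading
        self._buffer.append(reading)
        if len(self._buffer) >= self.batch_size:
            batch, self._buffer = self._buffer, []
            return batch                           # buffered batch ready for transmission
        return None
```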

Relevance:

30.00%

Publisher:

Abstract:

The topic of this dissertation is the human right of access to the internet. The first chapter seeks to affirm the existence of this right and its essential character. To that end, four kinds of grounds are presented. The first is grounded in international human rights law and is based on the analysis of three United Nations documents. The second is material and seeks to demonstrate that the internet has become an indispensable instrument for the fulfilment of various duties and rights, many of them human rights. Access should therefore be considered a right in itself, endowed with the same legal status as the rights that depend on it. The third ground is philosophical. The communitarian aspect of the internet is highlighted, and it is shown that there is a virtual citizen who holds rights and duties on the network. At this point, drawing on the lesson of Hannah Arendt, it is argued that if there is a digital dimension of citizenship, there must be a right to acquire it, which is realized through the right of access to the internet. The fourth ground is positive law and is directed specifically at the recognition of an implied, unwritten fundamental right of access to the internet in the Brazilian constitutional order. A comparative law study follows, analysing how the issue has been treated by legislation and case law in several countries. At the end of the first chapter, the most common objections to the recognition of the human right of access to the internet are presented and rebutted, including the question of the costs of the right. Having affirmed the existence of the right, the second chapter analyses its content and its legal limits. The right is first subdivided into a dimension of access to physical infrastructure and a dimension of access to content. The main Brazilian public policies aimed at realizing both dimensions are presented. Next, possible violations of the right are studied. One form of injury is the absence of service in certain localities. Another is online censorship, which is classified according to the method used, whether hardware or software, and according to the agent carrying it out, whether state or private. The constitutionality of disconnection penalties, whether permanent or temporary, and of measures interrupting the service entirely is analysed, together with Lei 12.737/2012. Requirements for online content filtering to be lawful are presented. The findings are compared with Projeto de Lei 2.126/2011, the so-called marco civil da internet. Finally, the enforceability of the right is studied with respect to both dimensions.

Relevance:

30.00%

Publisher:

Abstract:

Currently, Distributed Denial of Service (DDoS) attacks have been identified as one of the most serious problems on the Internet. The aim of DDoS attacks is to prevent legitimate users from accessing desired resources, such as network bandwidth. Hence the immediate task of DDoS defense is to provide as many resources as possible to legitimate users when there is an attack. Unfortunately, most current defense approaches cannot efficiently detect and filter out attack traffic. Our approach is to find network anomalies by using a neural network, deploy the system at distributed routers, identify the attack packets, and then filter them. The marks in the IP header that are generated by a group of IP traceback schemes, Deterministic Packet Marking (DPM)/Flexible Deterministic Packet Marking (FDPM), assist this process of identifying attack packets. The experimental results show that this approach can be used to defend against both intensive and subtle DDoS attacks, and can capture the characteristic of DDoS attacks of starting from multiple sources towards a single victim. According to the results, we find that the marks in IP headers can enhance the sensitivity and accuracy of detection, thus improving legitimate traffic throughput and reducing attack traffic throughput. Therefore, the approach can perform well in filtering DDoS attack traffic precisely and effectively.
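
To illustrate how traceback marks can assist attack-packet filtering, the following is a minimal Python sketch under stated assumptions: a simple per-mark rate check stands in for the paper's neural-network anomaly detector, and the Packet and MarkAssistedFilter names, fields and threshold are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: filtering packets using DPM/FDPM-style traceback marks.
# A per-mark rate check stands in for the paper's neural-network detector.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Packet:
    src_mark: int      # traceback mark recovered from the IP header
    dst: str
    payload_len: int

class MarkAssistedFilter:
    def __init__(self, rate_threshold: int = 1000):
        self.rate_threshold = rate_threshold   # packets per window before a mark is suspect
        self.window_counts = Counter()
        self.blocked_marks = set()

    def observe(self, pkt: Packet) -> None:
        """Count packets per ingress mark within the current window."""
        self.window_counts[pkt.src_mark] += 1

    def end_window(self) -> None:
        """Flag marks whose per-window rate exceeds the threshold, then reset.
        (In the paper, a neural network would make this decision.)"""
        for mark, count in self.window_counts.items():
            if count > self.rate_threshold:
                self.blocked_marks.add(mark)
        self.window_counts.clear()

    def allow(self, pkt: Packet) -> bool:
        """Drop packets whose traceback mark was flagged as an attack source."""
        return pkt.src_mark not in self.blocked_marks
```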

Relevance:

30.00%

Publisher:

Abstract:

As the average number of spam messages received continues to increase exponentially, both the Internet service provider and the end user suffer. The lack of an efficient solution may threaten the usability of email as a means of communication. In this paper we present a filtering mechanism applying the idea of preference ranking. This filtering mechanism distinguishes spam emails from other email on the Internet. The preference ranking gives similarity values between nominated emails and the spam emails specified by users, so that the ISP/end users can deal with spam emails at filtering points. We designed three filtering points to classify nominated emails into spam email, unsure email and legitimate email. This filtering mechanism can be applied both in middleware and at the client side. The experiments show that high precision, recall and TCR (total cost ratio) of spam emails can be achieved with the preference-based filtering mechanisms.
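
As a minimal sketch of how three filtering points might partition messages by a preference-ranking similarity score, the snippet below maps a score to spam, unsure or legitimate; the thresholds and function name are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch: three-way classification of an email by a similarity score
# produced by a preference-ranking comparison against user-specified spam examples.
SPAM_THRESHOLD = 0.8    # assumed cut-off above which a message is treated as spam
LEGIT_THRESHOLD = 0.4   # assumed cut-off below which a message is treated as legitimate

def classify(similarity_to_spam: float) -> str:
    """Map a similarity score in [0, 1] to 'spam', 'unsure' or 'legitimate'."""
    if similarity_to_spam >= SPAM_THRESHOLD:
        return "spam"
    if similarity_to_spam <= LEGIT_THRESHOLD:
        return "legitimate"
    return "unsure"   # left for a later filtering point or the user to decide
```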

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes an innovative adaptive multi-classifier spam filtering model, with a grey-list analyser and a dynamic feature selection method, to overcome false-positive problems in email classification. It also presents additional techniques to minimize the added complexity. Empirical evidence indicates the success of this model over existing approaches.

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, with the rapid growth of the Internet and email, there has been a dramatic growth in spam. Spam is commonly defined as unsolicited email messages, and protecting email from the infiltration of spam is an important research issue. Classification algorithms have been successfully used to filter spam, but with a certain amount of false-positive trade-offs, which is sometimes unacceptable to users. This paper presents an email classification approach that overcomes the burden of the analyzing technique of the GL (grey list) analyzer, as a further refinement of the synthesis-based email classification technique. In this approach, we introduce a “majority voting grey list (MVGL)” analyzing technique which analyzes the GL emails by using the majority voting (MV) algorithm. We present two different variations of the MV system: one is the simple MV (SMV) and the other is the ranked MV (RMV). Our empirical evidence demonstrates the improvements of this approach compared to the existing GL analyzer [7].
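
For illustration, here is a minimal Python sketch of the simple majority-voting step described above, in which a grey-listed email is resolved by the verdicts of several base classifiers; the classifier interface and the tie-breaking rule are assumptions, not the paper's implementation, and a ranked variant would weight each vote by classifier confidence instead.

```python
# Hypothetical sketch: resolving a grey-listed email by simple majority voting
# over the verdicts of several base classifiers.
from collections import Counter
from typing import Callable, List

Classifier = Callable[[str], str]   # maps an email body to "spam" or "legitimate"

def simple_majority_vote(email_body: str, classifiers: List[Classifier]) -> str:
    votes = Counter(clf(email_body) for clf in classifiers)
    spam_votes = votes["spam"]
    legit_votes = votes["legitimate"]
    if spam_votes == legit_votes:
        return "legitimate"          # assumed tie-break in favour of delivery
    return "spam" if spam_votes > legit_votes else "legitimate"
```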

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, with the rapid growth of the Internet and email, there has been a dramatic growth in spam. Spam is commonly defined as unsolicited email messages, and protecting email from the infiltration of spam is an important research issue. Classification algorithms have been successfully used to filter spam, but with a certain amount of false-positive trade-offs, which is sometimes unacceptable to users. This paper presents an approach to overcome the burden of the GL (grey list) analyzer as a further refinement to our multi-classifier based classification model (Islam, M. and W. Zhou 2007). In this approach, we introduce a “majority voting grey list (MVGL)” analyzing technique which analyzes the generated GL emails by using the majority voting (MV) algorithm. We present two different variations of the MV system: one is the simple MV (SMV) and the other is the ranked MV (RMV). Our empirical evidence demonstrates the improvements of this approach compared to the existing GL analyzer of the multi-classifier based spam filtering process.