70 results for spam


Relevance:

10.00%

Abstract:

The messaging service is one of the most important services on a network, so security measures for this service are of the utmost importance. Unwanted messages, such as advertising, arriving in large volumes every day have become a concern for many network and system administrators, who work to fight them. These messages are known as spam. Spam is the term commonly used for the sending of unsolicited electronic messages to a large number of people at once, usually, but not exclusively, for advertising purposes. This work describes defense mechanisms (spam rejection) on the computer responsible for the messaging service (the e-mail server) when it receives spam.
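
One common rejection mechanism of the kind this work describes is a DNS blocklist (DNSBL) lookup performed by the mail server before a message is accepted. The sketch below is only a minimal illustration using the Python standard library; the blocklist zone and the client address are placeholders, not details taken from the original work:

    # Minimal sketch of an SMTP-time DNSBL check (hypothetical example).
    # A listed client IP resolves inside the blocklist zone; an unlisted one
    # raises a name-resolution error and the message is accepted.
    import socket

    def is_listed(client_ip: str, dnsbl_zone: str = "zen.spamhaus.org") -> bool:
        """Return True if client_ip appears in the given DNS blocklist."""
        reversed_ip = ".".join(reversed(client_ip.split(".")))
        query = f"{reversed_ip}.{dnsbl_zone}"
        try:
            socket.gethostbyname(query)   # any A record means "listed"
            return True
        except socket.gaierror:           # NXDOMAIN: not listed
            return False

    if __name__ == "__main__":
        ip = "203.0.113.7"                # placeholder client address
        if is_listed(ip):
            print(f"550 rejecting mail from {ip}: listed on DNSBL")
        else:
            print(f"accepting mail from {ip}")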

Relevance:

10.00%

Abstract:

First results of a coupled modeling and forecasting system for pelagic fisheries are presented. The system currently consists of three mathematically fundamentally different model subsystems: POLCOMS-ERSEM, which provides the physical-biogeochemical environment implemented on the domain of the North-West European shelf, and the SPAM model, which describes sandeel stocks in the North Sea. The third component, the SLAM model, connects POLCOMS-ERSEM and SPAM by computing the physical-biological interaction. Our main lesson from coupling the model subsystems is that well-defined and generic model interfaces are very important for a successful and extendable coupled model framework. The integrated approach, simulating ecosystem dynamics from physics to fish, allows analysis of pathways in the ecosystem, tracing how changes in ocean climate and lower trophic levels propagate and quantifying their impact on the higher trophic level, in this case the sandeel population, demonstrated here on the basis of hindcast data. The coupled forecasting system is tested on some typical scientific questions arising in spatial fish stock management and marine spatial planning, including the determination of local and basin-scale maximum sustainable yield, stock connectivity and source/sink structure. The presented simulations indicate that sandeel stocks are currently exploited close to the maximum sustainable yield, but large uncertainty is associated with determining that yield due to the stock's internal dynamics and climatic variability. Our statistical ensemble simulations indicate that the predictive horizon set by interannual climate variability is 2–6 yr, after which only an asymptotic probability distribution of stock properties, such as biomass, is predictable.
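
The abstract identifies well-defined, generic model interfaces as the key to an extendable coupled framework. Purely as an illustration (the class and method names below are hypothetical, not taken from POLCOMS-ERSEM, SLAM or SPAM), such a coupling interface could look like this:

    # Hypothetical sketch of a generic coupling interface between model
    # subsystems; names are illustrative, not taken from the actual system.
    from abc import ABC, abstractmethod

    class ModelComponent(ABC):
        """Common interface every coupled subsystem exposes."""

        @abstractmethod
        def step(self, t: float, dt: float) -> None:
            """Advance the component from time t by dt."""

        @abstractmethod
        def export_fields(self) -> dict:
            """Return the fields this component provides to others."""

        @abstractmethod
        def import_fields(self, fields: dict) -> None:
            """Receive fields computed by other components."""

    def run_coupled(components: list, t_end: float, dt: float) -> None:
        """Drive all components with a simple sequential exchange per step."""
        t = 0.0
        while t < t_end:
            shared: dict = {}
            for c in components:        # e.g. physics -> biogeochemistry -> fish
                c.import_fields(shared)
                c.step(t, dt)
                shared.update(c.export_fields())
            t += dt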

Relevance:

10.00%

Abstract:

Thesis (Master's)--University of Washington, 2012

Relevance:

10.00%

Abstract:

A backside protein-surface imprinting process is presented herein as a novel way to generate specific synthetic antibody materials. The template is covalently bound to a carboxylated-PVC supporting film previously cast on gold, allowed to interact with charged monomers, and then surrounded by a thick polymer. This polymer is then covalently attached to a transducing element, and the backside of this structure (supporting film plus template) is removed like a regular “tape”. The new sensing layer is exposed after full template removal, showing a high density of re-binding positions, as evidenced by SEM. To ensure that the templates had been efficiently removed, this re-binding layer was further cleaned with a proteolytic enzyme and a solution washout. The final material was named MAPS, SPAM read backwards, because it acts as a backside-imprinted counterpart of that recent approach. It was able to generate, for the first time, a specific response to a complex biomolecule from a synthetic material. Non-imprinted materials (NIMs) were also produced as blanks and used as a control for the imprinting process. All chemical modifications were followed by electrochemical techniques, on the supporting film and transducing element of both MAPS and NIM. Only the MAPS-based device responded to oxLDL, and the sensing layer was insensitive to other serum proteins, such as myoglobin and haemoglobin. Linear behaviour between log(C, μg mL−1) and charge transfer resistance (RCT, Ω) was observed by electrochemical impedance spectroscopy (EIS). Calibrations made in Fetal Calf Serum (FCS) were linear from 2.5 to 12.5 μg mL−1 (RCT = 946.12 × log C + 1590.7) with an R-squared of 0.9966. Overall, these are promising results towards the design of materials that act close to natural antibodies and can be applied to practical uses of clinical interest.
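
As a worked example of the reported calibration, the snippet below inverts RCT = 946.12 × log C + 1590.7 to estimate a concentration from a measured charge transfer resistance; the measured value is hypothetical, and the conversion is only meaningful inside the reported 2.5–12.5 μg mL−1 range:

    # Sketch: inverting the reported EIS calibration of the MAPS sensor,
    # R_CT = 946.12 * log10(C) + 1590.7 (C in ug/mL, R_CT in ohm),
    # to estimate concentration from a measured charge transfer resistance.
    SLOPE = 946.12        # ohm per decade of concentration (from the abstract)
    INTERCEPT = 1590.7    # ohm (from the abstract)

    def concentration_from_rct(rct_ohm: float) -> float:
        """Estimate oxLDL concentration (ug/mL); valid roughly 2.5-12.5 ug/mL."""
        return 10 ** ((rct_ohm - INTERCEPT) / SLOPE)

    if __name__ == "__main__":
        measured_rct = 2400.0                     # hypothetical reading, ohm
        c = concentration_from_rct(measured_rct)
        print(f"estimated concentration: {c:.2f} ug/mL")   # about 7.2 ug/mL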

Relevance:

10.00%

Abstract:

"Les prestataires techniques fournissant des services sur Internet (« FSI ») incluant le simple transporteur de documents technologiques, le prestataire offrant des services d’antémémorisation (ou services de caching) ou l’hébergeur peuvent être responsables face aux tiers ou face à leurs clients, et ce, à plusieurs niveaux. Les FSI peuvent dans certains cas être tenus responsables face aux tiers pour le caractère illicite de l’information qu’ils diffusent. Certaines informations circulant sur Internet peuvent affecter les droits d’auteur de tiers ou être diffamatoires envers certains individus et les FSI peuvent jouer un rôle dans la transmission de ces informations sur Internet. Face à leurs clients, les FSI qui ont accès à leurs renseignements personnels afin entre autres d’être en mesure d’offrir les services demandés peuvent dans certains cas être tenus responsables pour avoir fait une collecte, une utilisation ou une divulgation non autorisée de ces renseignements. Ils peuvent également être tenus responsables d’avoir fait parvenir des courriels publicitaires non sollicités à leurs clients ou pour avoir suspendu le compte d’un client qui envoie du spam dans certaines circonstances. Le présent article traite des questions de responsabilité des prestataires techniques Internet au Québec : envers les tiers en ce qui a trait au caractère illicite des documents transmis ou hébergés; et envers leurs clients relativement à leurs obligations de respect des renseignements personnels de ces clients et à leur responsabilité pour les questions relatives au spam."

Relevance:

10.00%

Abstract:

We are inundated daily by countless unsolicited electronic messages, whether advertisements, viruses, or what are now called metaviruses. The latter are hoaxes sent to Internet users suggesting that they perform some action, which will cause more or less serious damage to the user's system. The author examines the problems these metaviruses raise for the civil liability of their senders. He concludes that this regime, although applicable in theory, remains ill-suited to the problem, particularly when it comes to proving the elements of civil liability. One must first establish the sender's capacity (or incapacity) of discernment, whether the recipient knew of that state, and the existence of faulty conduct on the part of the sender, or even of both parties. It then remains to determine what a reasonable course of conduct would have been in the situation. Note that the victim could be found partly responsible for his or her own damages. The causal link between the act and the damage must then be proven, which, given the factual circumstances, can be an arduous task. The author concludes that the advisability of such a recourse is highly questionable, since the costs are disproportionate to the damages and the chances that a judge would hold the sender of the metavirus liable are rather slim. The best solution, he adds, remains caution.

Relevance:

10.00%

Abstract:

Cyber-advertising abounds on the Internet. Banner ads, pop-up windows and spam are part of Internet users' daily lives. Yet, although anarchy may seem to be the rule there, advertising on the web is regulated, as it is in traditional media. We analyse the legislation that applies in Quebec to advertising on the Internet: the Loi sur la concurrence, adopted by the Canadian legislator, and the Loi sur la protection du consommateur, enacted by the Quebec legislator. These two statutes nevertheless leave something to be desired, since they govern only the content of the message and not the forms it takes on the Internet, and they fail to take into account the situation created by the particular nature of this medium. To guarantee cyber-consumers protection similar to what they enjoy with respect to other media, new normative sources will have to be created. Keeping the rules in step with reality remains the ideal to be attained.

Relevance:

10.00%

Abstract:

The alerts our antivirus software sends us, and the various reports broadcast in the media, make us aware of the existence of threats in cyberspace. Whether spam, denial-of-service attacks or viruses, cyberspace is full of threats that persist despite the efforts deployed to fight them. Does this have to do with the effectiveness of the policies currently in place to combat this phenomenon? To answer this question, the general objective of this thesis is to determine which prevention policies (anti-spam laws, public-private partnerships and botnet takedowns) most strongly influence the rate of detected computer threats, while also examining the effect of various socio-economic factors on that variable. Data collected by the antivirus software of the company ESET were used. The results suggest that public-private partnerships offering personalized assistance to Internet users are the most effective prevention policy. Botnet takedowns can also be effective, but only when several important actors/servers of the network are put out of action; the takedown of the Mariposa botnet is a good example. The results of this thesis suggest that combining partnerships with takedowns would be the wisest choice for fighting cyber threats. These two prevention policies both offer effective methods for fighting computer threats, and that is why they should be used together to ensure a better defense against this phenomenon.

Relevance:

10.00%

Abstract:

Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into the content and structure of these systems. In this paper, we analyse the main network characteristics of two of the systems. We consider their underlying data structures – so-called folksonomies – as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.
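
As an illustration of the tag co-occurrence network described above, the sketch below builds such a network from (user, tag, resource) assignments and computes the two adapted measures mentioned in the abstract; the toy triples are invented and are not the paper's data:

    # Sketch: tag co-occurrence network from folksonomy triples
    # (user, tag, resource), with simple network statistics via networkx.
    from itertools import combinations
    import networkx as nx

    assignments = [                      # invented toy (user, tag, resource) triples
        ("u1", "python", "r1"), ("u1", "spam", "r1"),
        ("u2", "spam", "r1"),   ("u2", "filter", "r1"),
        ("u3", "python", "r2"), ("u3", "filter", "r2"),
    ]

    # Two tags co-occur when some user attached both to the same resource.
    post_tags = {}
    for user, tag, resource in assignments:
        post_tags.setdefault((user, resource), set()).add(tag)

    G = nx.Graph()
    for tags in post_tags.values():
        for a, b in combinations(sorted(tags), 2):
            w = G.get_edge_data(a, b, {}).get("weight", 0)
            G.add_edge(a, b, weight=w + 1)

    print("average clustering coefficient:", nx.average_clustering(G))
    print("characteristic path length:", nx.average_shortest_path_length(G))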

Relevance:

10.00%

Abstract:

Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into the content and structure of these systems. In this paper, we analyse the main network characteristics of two of these systems. We consider their underlying data structures – so-called folksonomies – as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.

Relevance:

10.00%

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; the latter, however, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems.

Instead of the two being regarded as competing paradigms, the obvious potential synergies of a combination motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes.

This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; here we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages over clustering approaches.

To complement the identification of suitable methods for capturing semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results, which suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that a case-by-case decision should be taken instead of disregarding spammers as a matter of principle.

Finally, we discuss a set of applications which operationalize the results of our studies to enhance both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
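
The thesis derives relatedness between keywords from the tagging data itself. Purely as an illustrative sketch, and not a reproduction of the thesis' actual measures, the snippet below computes a simple cosine similarity between tag-resource co-occurrence vectors over invented toy assignments:

    # Sketch: tag-tag semantic relatedness as cosine similarity between
    # tag-resource co-occurrence vectors (toy data; illustrative only).
    import math
    from collections import Counter, defaultdict

    assignments = [                      # invented (user, tag, resource) triples
        ("u1", "car", "r1"), ("u2", "automobile", "r1"),
        ("u3", "car", "r2"), ("u3", "automobile", "r2"),
        ("u4", "banana", "r3"),
    ]

    tag_vectors = defaultdict(Counter)
    for _, tag, resource in assignments:
        tag_vectors[tag][resource] += 1

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[k] * b[k] for k in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    print(cosine(tag_vectors["car"], tag_vectors["automobile"]))  # high: likely synonyms
    print(cosine(tag_vectors["car"], tag_vectors["banana"]))      # zero: unrelated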

Relevance:

10.00%

Abstract:

What kind of science is appropriate for understanding the Facebook? How does Google find what you're looking for... and exactly how do they make money doing so? What structural properties might we expect any social network to have? How does your position in an economic network (dis)advantage you? How are individual and collective behavior related in complex networks? What might we mean by the economics of spam? What do game theory and the Paris subway have to do with Internet routing? Networked Life looks at how our world is connected -- socially, economically, strategically and technologically -- and why it matters. The answers to the questions above are related. They have been the subject of a fascinating intersection of disciplines including computer science, physics, psychology, mathematics, economics and finance. Researchers from these areas all strive to quantify and explain the growing complexity and connectivity of the world around us, and they have begun to develop a rich new science along the way. Networked Life will explore recent scientific efforts to explain social, economic and technological structures -- and the way these structures interact -- on many different scales, from the behavior of individuals or small groups to that of complex networks such as the Internet and the global economy. This course covers computer science topics and other material that is mathematical, but all material will be presented in a way that is accessible to an educated audience with or without a strong technical background. The course is open to all majors and all levels, and is taught accordingly. There will be ample opportunities for those of a quantitative bent to dig deeper into the topics we examine. The majority of the course is grounded in scientific and mathematical findings of the past two decades or less.

Relevance:

10.00%

Abstract:

An E-Learning Gateway for the latest news and information relating to Computer Crime for INFO2009

Relevance:

10.00%

Abstract:

This is an informational resource that aims to inform the general public about security and privacy when using the Internet.