981 results for anonimato rete privacy deep web onion routing cookie
Abstract:
CH.1: introduction to the creation of the Internet and its spread; CH.2: overview of the data available online and the tools through which information can be extracted from it; CH.3: the concepts of privacy and anonymity applied to the Internet, some regulations, a summary of cookies and spyware; CH.4: the deep web, what it is and how to reach it; CH.5: the TOR project, a list of its components, an explanation of the protocol for creating anonymous connections, peculiarities and problematic aspects; CH.6: conclusions; a survey of projects related to TOR, statistics on the use of the anonymous Internet, and considerations on the social effects of anonymity and on the inviolability of this system.
Abstract:
Privacy and security on the network are topical issues. Through the study of several documents and experimentation with applications that guarantee anonymity, the thesis analyses the current situation. Our privacy is compromised, and global awareness-raising is therefore important.
Abstract:
Work based on the report for the course “Sociologia das Novas Tecnologias de Informação” within the Integrated Master's in Industrial Engineering and Management at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa in 2015-16. The work was supervised by Prof. António Brandão Moniz of the Departamento de Ciências Sociais Aplicadas (DCSA) at the same faculty.
Abstract:
In the 21st century, where the new Information and Communication Technologies are the order of the day, we are seeing manifestations that traditionally belonged to less virtual environments and that now involve previously unheard-of age groups in which gender differences are evident. With easy access and connection to the Internet, many of us have sufficient tools to post on social networks certain emotions in their most extreme degree, as well as suicidal ideation. However, there are deeper places, unknown to some users, such as the Deep Web (and its Tor browser), which allow complete user anonymity. There is therefore a need to build a corpus of suicide-related messages and messages connected with deep emotions, in order to analyse the lexicon through computational language processing, after a prior categorisation of the results, with the aim of fostering the creation of programs that detect these manifestations and perform a preventive role.
Abstract:
Educating the new generations in the conscious use of the web is an activity of considerable importance, given the ever more intense use that contemporary society makes of the large and varied set of technologies identified by the word "network". The network is therefore no longer comparable merely to a virtual place to visit, but almost to an "atmosphere" that surrounds reality itself, constantly bringing ever new possibilities and concerns close at hand. Network users, whether "natives" or "immigrants", find themselves in contact with this rapidly changing environment and learn its rules and customs, sometimes carrying them over into reality with greater or lesser ease. The younger generations are particularly permeable to this kind of learning and show an ever greater affinity with the web as a tool, at times running the risk of confusing the virtual with the real. The drift in values and ideology of European and Italian society, however, leaves gaps that are often filled by the relationships closest to them. Including those in the cloud, for better or for worse. The risk of mistaking the means for the end, moreover, is always present. The challenge for the educational, family and social system is to be attentive, to keep up to date with technologies, to evaluate carefully, to discern and to recognise works by their fruits, in order to share the human experience in all its aspects with those who, more than anyone, are searching for deep answers to their questions. We must not be afraid to get involved, sometimes even to take risks, because the stakes are extremely high: what is at stake is the very relationship of exchange and trust between different generations. The meshes of our "network" of relationships, of ties, of communication keep growing in number, but we must not promote quantity alone at the expense of the quality of their substance, especially in the case of adolescents. In conclusion, I believe that in web education the following are fundamental: attention, mutual listening, care for detail and attention to respect for the rules. Precision of oversight, a sense of limits and the forward-looking value of expectations are indispensable tools for building, day after day, a "network" shaped for people, and not merely people shaped for the network.
Abstract:
With the evolution of the P2P research field, new problems, such as those related to information security, have arisen. It is important to provide security mechanisms to P2P systems, since security has already become one of the key issues when evaluating them. However, even though many P2P systems have been adapted to provide a security baseline to their underlying applications, more advanced capabilities are becoming necessary. Specifically, privacy preservation and anonymity are deemed essential to make the information society sustainable. Unfortunately, it may sometimes be difficult to attain anonymity unless it is included in the system's initial design. The JXTA open protocols specification is a good example of this kind of scenario. This work studies how to provide anonymity to JXTA's architecture in a feasible manner and proposes an extension which allows deployed services to process two-way messaging without disclosing the endpoints' identities to third parties.
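To make the idea of two-way messaging without endpoint disclosure concrete, here is a minimal, hypothetical sketch (not JXTA code and not the extension proposed in the work): a relay maps a random reply tag to the requester, so the service and any outside observer see only the relay and the tag.

```python
# Conceptual sketch (not JXTA code): a relay hides both endpoints' identities
# by mapping a random reply tag to the requester, so the service and any
# third party only ever see the relay and the tag.
import secrets


class Relay:
    def __init__(self):
        self._pending = {}  # reply tag -> requester callback

    def forward_request(self, requester_cb, service, payload):
        tag = secrets.token_hex(8)            # pseudonymous reply handle
        self._pending[tag] = requester_cb     # mapping known only to the relay
        service.handle(tag, payload, self)    # the service never sees the requester

    def forward_reply(self, tag, payload):
        self._pending.pop(tag)(payload)       # route the reply back to the requester


class EchoService:
    def handle(self, tag, payload, relay):
        relay.forward_reply(tag, payload.upper())


if __name__ == "__main__":
    relay, service = Relay(), EchoService()
    relay.forward_request(lambda reply: print("reply:", reply),
                          service, "hello anonymous world")
```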
Abstract:
In recent years, Semantic Web (SW) research has produced significant outcomes. Various industries have adopted SW technologies, while the ‘deep web’ is still approaching the critical transformation point at which the majority of data found on the deep web will be exploited through SW value layers. In this article we analyse SW applications from a ‘market’ perspective. We set out the key requirements for real-world, SW-enabled information systems and discuss the major difficulties that have delayed SW uptake. This article contributes to the SW and knowledge-management literature by providing a context for discourse towards best practices on SW-based information systems.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.
Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
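As a rough illustration of the querying task described above, here is a minimal sketch of automated form querying under assumed markup; the URL, the "q" parameter and the CSS selectors are invented for illustration and are not the thesis's data model or query language.

```python
# A minimal sketch of form-query automation: fill a (hypothetical) web search
# form programmatically and pull structured data out of the dynamic result
# page. The URL, the field name "q" and the CSS selectors are assumptions.
import requests
from bs4 import BeautifulSoup


def query_web_database(base_url: str, term: str) -> list[dict]:
    # Issue the query the way a user would via the search form (HTTP GET here;
    # real interfaces may need POST, session cookies or JavaScript handling).
    resp = requests.get(base_url, params={"q": term}, timeout=10)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")
    records = []
    for row in soup.select("div.result"):          # hypothetical result markup
        title = row.select_one("a.title")
        price = row.select_one("span.price")
        records.append({
            "title": title.get_text(strip=True) if title else None,
            "price": price.get_text(strip=True) if price else None,
            "url": title["href"] if title and title.has_attr("href") else None,
        })
    return records


if __name__ == "__main__":
    for rec in query_web_database("https://example.com/search", "onion routing"):
        print(rec)
```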
Abstract:
This is the original "onion routing" paper; it explains in detail how onions are built and how they work. This is optional reading, although I strongly advise you to read the Introduction and Section 3, "Onions".
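For readers who want a feel for the layering before opening the paper, here is a toy Python sketch of the onion idea. It assumes pre-shared symmetric keys purely to keep the example short; the paper's actual protocol establishes keys per route and is not reproduced here.

```python
# A toy illustration of layered ("onion") encryption: the sender wraps the
# payload once per relay, and each relay strips exactly one layer, learning
# only the next hop. Real onion routing negotiates keys per circuit; here
# symmetric Fernet keys are assumed to be pre-shared.
from cryptography.fernet import Fernet

relays = ["relay1", "relay2", "relay3"]
keys = {name: Fernet.generate_key() for name in relays}

def build_onion(payload: bytes) -> bytes:
    onion = payload
    # Wrap from the last relay outwards, so the first relay's layer is outermost.
    for name in reversed(relays):
        onion = Fernet(keys[name]).encrypt(onion)
    return onion

def route(onion: bytes) -> bytes:
    for name in relays:                     # each hop peels exactly one layer
        onion = Fernet(keys[name]).decrypt(onion)
    return onion

if __name__ == "__main__":
    wrapped = build_onion(b"GET /index.html")
    print(route(wrapped))                   # b'GET /index.html'
```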
Abstract:
Paper presented at the 6th CAPSI - Conferência da Associação Portuguesa de Sistemas de Informação - Escola Superior de Tecnologia de Bragança, 26-28 October.
Abstract:
JXTA is a peer-to-peer (P2P) middleware which has undergone successive iterations over its 10 years of history, slowly incorporating a security baseline that can cater to different applications and services. However, in order to appeal to a broader set of secure scenarios, it would be interesting to take into consideration more advanced capabilities, such as anonymity. There are several proposals for anonymous protocols that can be applied in the context of a P2P network, but it is necessary to be able to choose the right one given each application's needs. In this paper, we provide an experimental evaluation of two relevant protocols, each belonging to a different category of approaches to anonymity: unimessage and split message. We base our analysis on two scenarios, with stable and non-stable peers, and three metrics: round-trip time (RTT), node processing time and reliability.
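A minimal sketch of how the three metrics could be collected for any request/reply primitive follows; `send_anonymous` is a stand-in for the protocol under test, not code from the paper.

```python
# Sketch: compute round-trip time, node processing time (reported by the peer)
# and reliability (fraction of completed exchanges) over repeated trials.
import random
import statistics
import time


def send_anonymous(msg: str) -> tuple[str, float]:
    """Toy peer: occasionally fails, otherwise returns reply + processing time."""
    if random.random() < 0.1:
        raise TimeoutError("message lost")
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))      # pretend to do crypto/forwarding
    return msg.upper(), time.perf_counter() - start


def evaluate(trials: int = 100) -> dict:
    rtts, proc_times, ok = [], [], 0
    for i in range(trials):
        t0 = time.perf_counter()
        try:
            _, proc = send_anonymous(f"ping {i}")
        except TimeoutError:
            continue
        rtts.append(time.perf_counter() - t0)
        proc_times.append(proc)
        ok += 1
    return {
        "mean_rtt_s": statistics.mean(rtts),
        "mean_processing_s": statistics.mean(proc_times),
        "reliability": ok / trials,
    }


if __name__ == "__main__":
    print(evaluate())
```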
Abstract:
The expansion of Information and Communication Technologies (ICT) has brought many benefits, but also some dangers. News about ICT-related crimes is common nowadays. The terms cybercrime and cyberterrorism are frequently used but, are they really a serious threat to our society? This work analyses cybercrime and cyberterrorism. To do so, an in-depth study is carried out from different points of view. First, basic aspects of the topic are analysed: the context in which these activities are carried out, cyberspace and its features, the advantages cybercrime has over traditional crime, characteristics and relevant examples of cyberterrorism, and the importance of critical infrastructure protection. Then, a study of the world of cybercrime is made, covering the different kinds of cybercriminals, the most common criminal acts, the tools and techniques used by cybercrime, the deep web and cryptocurrency; some of the best-known criminal groups and their activities are also described, and the economic consequences of cybercrime are assessed. Finally, there is a review of the legal means that different countries and organizations have established to fight these unlawful acts; this includes the analysis of several types of security strategies approved in countries all around the world, operational response groups (both law enforcement and CSIRT/CERT), and the legislation published to prosecute cybercrime and cyberterrorism, with special attention to Spanish legislation. In this way, after reading this project one can obtain a complete overall view of the world of cybercrime and cyberterrorism.
Abstract:
The growing number of sites and resources on the Internet has made it harder to search for and retrieve information useful for education and research. Although the network contains extremely important sources, part of it is unknown to traditional search engines; this part is called the Deep Web. To access this small universe of the Internet it is necessary to know the mechanisms, strategies and tools that facilitate and guarantee the achievement of our objectives.
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations' databases. Intruders are becoming smarter at obfuscating web requests to evade detection, and this, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, has made it evident that existing, mostly static-signature approaches cannot cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes a labelled dataset of numerical attributes as input for classifying true positives and negatives. We present in this paper a Numerical Encoding to Tame SQLIA (NETSQLIA) that implements a proof of concept for scalable numerical encoding of features into dataset attributes with a labelled class, obtained from deep web traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). The paper presents a technique for extracting numerical attributes of any size, primed as an input dataset for an Artificial Neural Network (ANN) and statistical Machine Learning (ML) algorithms implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the suitability of the model for the accurate classification of both legitimate web requests and SQLIA payloads.
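The following hedged sketch illustrates the general shape of such a pipeline, with scikit-learn's Perceptron and LogisticRegression standing in for the paper's TCAP and TCLR; the feature set and the toy data are assumptions for illustration, not the authors' encoding.

```python
# Sketch: hand-crafted numerical encoding of web requests, then two linear
# classifiers. scikit-learn's Perceptron (non-averaged) and LogisticRegression
# stand in for the Two-Class Averaged Perceptron and Two-Class Logistic
# Regression; features and samples below are illustrative only.
import re

from sklearn.linear_model import LogisticRegression, Perceptron

SQL_TOKENS = ("union", "select", "or 1=1", "--", "drop", "sleep(")

def encode(request: str) -> list[float]:
    r = request.lower()
    return [
        len(r),                                   # request length
        r.count("'") + r.count('"'),              # quote characters
        sum(r.count(tok) for tok in SQL_TOKENS),  # SQL keyword hits
        len(re.findall(r"%[0-9a-f]{2}", r)),      # URL-encoded bytes
    ]

requests = [
    "GET /item?id=42",
    "GET /item?id=42' OR 1=1 --",
    "POST /login user=bob&pass=secret",
    "GET /item?id=1 UNION SELECT password FROM users",
]
labels = [0, 1, 0, 1]                             # 1 = SQLIA payload

X = [encode(r) for r in requests]
for model in (Perceptron(max_iter=1000), LogisticRegression(max_iter=1000)):
    model.fit(X, labels)
    probe = encode("GET /item?id=7' UNION SELECT * FROM users --")
    print(type(model).__name__, "->", model.predict([probe])[0])
```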
Abstract:
An accurate web system is presented for computing minimum itineraries (for pedestrians and for vehicles) between two points in the city of Barcelona, one of which is chosen by the user directly on a map and the other, alternatively, on the same map or from a selection list of Barcelona's main tourist attractions. The system is implemented using MapServer (1) as the server, OpenLayers (2) for the user interface, and a PostgreSQL (3)/PostGIS (4) database that holds OpenStreetMaps (5) data for navigation together with manually entered data for the tourist-attraction selection list. Route calculation uses pgRouting (6), while the CartoCiudad (7) cartography is accessed to display a base map and, optionally, street names and points of interest from the FondoUrbano, Vial and Topónimo layers of the CartoCiudad WMS server. The whole system runs on Windows 7 Home Premium (8). In addition, four new functions and a PostgreSQL user-defined type for the accurate computation of minimum itineraries are presented, together with the theoretical study that justifies their soundness.
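A minimal sketch of the routing step described above is given below; it assumes an OSM-derived `ways` table with id/source/target/cost columns and hypothetical connection settings, none of which are taken from the thesis.

```python
# Sketch: ask pgRouting's pgr_dijkstra for the minimum-cost path between two
# network nodes stored in PostgreSQL/PostGIS. Table, column and connection
# names are assumptions about how the OpenStreetMaps data was imported.
import psycopg2

EDGES_SQL = "SELECT id, source, target, cost, reverse_cost FROM ways"

def shortest_path(conn, start_node: int, end_node: int):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT seq, node, edge, cost, agg_cost "
            "FROM pgr_dijkstra(%s, %s, %s, false)",   # undirected graph
            (EDGES_SQL, start_node, end_node),
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect(dbname="barcelona_routing", user="postgres")
    for seq, node, edge, cost, agg_cost in shortest_path(conn, 1001, 2002):
        print(seq, node, edge, cost, agg_cost)
    conn.close()
```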