1000 results for web crawling
Abstract:
Notes on Web Crawling
Abstract:
This report describes web archiving at the National Library of Finland. The National Library of Finland has been archiving the Finnish web on a regular basis since 2006. Web archiving is an important part of the Library's endeavours to collect and preserve Finnish published cultural heritage. In 2010, the amount of harvested data was 200 million files, or 25 terabytes. The report takes the reader through the relevant legislation; internal plans and policies; funding and its allocation; the practices of web archiving; arrangements for the use of the archive; and issues arising from data security, sensitive materials, etc.
Abstract:
Programming Assignment on Web Crawling
Abstract:
Search engines exploit the Web's hyperlink structure to help infer information content. The new phenomenon of personal Web logs, or 'blogs', encourages more extensive annotation of Web content. If the resulting link structures bias the Web crawling applications that search engines depend upon, there are implications for another form of annotation rapidly on the rise: the Semantic Web. We conducted a Web crawl of 160,000 pages in which the link structure of the Web is compared with that of several thousand blogs. Results show that the two link structures are significantly different. We analyse the differences and infer the likely effect upon the performance of existing and future Web agents. The Semantic Web offers new opportunities to navigate the Web, but Web agents should be designed to take advantage of the emerging link structures, or their effectiveness will diminish.
Abstract:
The Web's link structure (termed the Web Graph) is a richly connected set of Web pages. Current applications use this graph for indexing and information retrieval purposes. Here, in contrast, the relationship between the Web Graph and the application is reversed: the structure of the Web Graph is allowed to influence the behaviour of an application. The paper presents a novel Web crawling agent, AlienBot, whose output is orthogonally coupled to the enemy generation strategy of a computer game. The Web Graph guides AlienBot, causing it to generate a stochastic process. The paper shows the effectiveness of such unorthodox coupling for both the playability of the game and the heuristics of the Web crawler. In addition, it presents results from the sample of Web pages collected by the crawling process. In particular, it shows: how AlienBot was able to identify the power law inherent in the link structure of the Web; that 61.74 per cent of Web pages use some form of scripting technology; that the size of the Web can be estimated at just over 5.2 billion pages; and that less than 7 per cent of Web pages fully comply with some variant of (X)HTML.
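For illustration, a minimal sketch (not AlienBot itself) of the measurement side of such a crawl: fetching a handful of pages, counting out-links per page, and recording the share of pages that use scripting. The seed URLs are placeholders, and the requests and BeautifulSoup libraries are assumed to be installed.

```python
# Minimal crawl-measurement sketch: count out-links per page and the share of
# pages using client-side scripting. Seed URLs are placeholders.
from collections import Counter

import requests
from bs4 import BeautifulSoup

SEEDS = ["https://example.org/", "https://example.com/"]  # placeholder seeds

out_degree = Counter()
pages_with_scripts = 0

for url in SEEDS:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue  # skip unreachable pages
    soup = BeautifulSoup(html, "html.parser")
    out_degree[url] = len(soup.find_all("a", href=True))   # out-links on this page
    if soup.find("script") is not None:                    # any scripting at all?
        pages_with_scripts += 1

print("out-degrees:", dict(out_degree))
print("share of pages with scripting:", pages_with_scripts / len(SEEDS))
```

Plotting the out-degree counts over a large sample is one way the reported power-law behaviour could be inspected.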
Abstract:
The objective of the PANACEA ICT-2007.2.2 EU project is to build a platform that automates the stages involved in the acquisition, production, updating and maintenance of the large language resources required by, among others, MT systems. The development of a Corpus Acquisition Component (CAC) for extracting monolingual and bilingual data from the web is one of the most innovative building blocks of PANACEA. The CAC, which is the first stage in the PANACEA pipeline for building Language Resources, adopts an efficient and distributed methodology to crawl for web documents with rich textual content in specific languages and predefined domains. The CAC includes modules that can acquire parallel data from sites with in-domain content available in more than one language. In order to extrinsically evaluate the CAC methodology, we conducted several experiments that used crawled parallel corpora for the identification and extraction of parallel sentences using sentence alignment. The corpora were then successfully used for domain adaptation of machine translation systems.
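As a hedged illustration of the filtering such a corpus-acquisition crawler performs, the sketch below keeps a fetched document only if its detected language matches the target and it contains enough in-domain terms; the langdetect package, the term list and the thresholds are assumptions, not PANACEA's actual CAC configuration.

```python
# Sketch of a focused-crawling filter: keep a document only if it is in the
# target language and mentions enough in-domain terms. Term list and threshold
# are illustrative assumptions.
from langdetect import detect  # third-party language identifier

TARGET_LANG = "en"                                      # target language code
DOMAIN_TERMS = {"environment", "emission", "climate"}   # illustrative seed terms
MIN_TERM_HITS = 2

def keep_document(text: str) -> bool:
    """Return True if the document looks in-language and in-domain."""
    try:
        if detect(text) != TARGET_LANG:
            return False
    except Exception:                                   # detection can fail on noise
        return False
    tokens = text.lower().split()
    return sum(t in DOMAIN_TERMS for t in tokens) >= MIN_TERM_HITS

print(keep_document("Rising emission levels affect the climate and environment."))
```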
Abstract:
The recent emergence of a new generation of mobile application marketplaces has changed business in the mobile ecosystem. The marketplaces have gathered over a million applications from hundreds of thousands of application developers and publishers. Thus, software ecosystems, consisting of developers, consumers and the orchestrator, have emerged as a part of the mobile ecosystem. This dissertation addresses the new challenges faced by mobile application developers in the new ecosystems through empirical methods. Using the theories of two-sided markets and business ecosystems as the basis, the thesis assesses monetization and value creation in the market as well as the impact of electronic word-of-mouth (eWOM) and developer multi-homing, i.e. contributing to more than one platform, in the ecosystems. The data for the study was collected by web crawling the three biggest marketplaces: Apple App Store, Google Play and Windows Phone Store. The dissertation consists of six individual articles. The results of the studies show a gap in monetization among the studied applications, with the majority of applications produced by small or micro-enterprises. The study finds only weak support for the impact of eWOM on the sales of an application in the studied ecosystem. Finally, the study reveals a clear difference in multi-homing rates between the top application developers and the rest. As discussed in the thesis, this has an impact on future market analyses: it seems that the smart device market can sustain several parallel application marketplaces.
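A small sketch of how a multi-homing rate of the kind reported above could be computed from crawled records; the developer-to-marketplace mapping is a hypothetical stand-in for the crawled data.

```python
# Sketch: developer multi-homing rate from crawled developer -> marketplace data.
# The mapping below is a made-up stand-in for records crawled from the
# App Store, Google Play and Windows Phone Store.
developer_stores = {
    "dev_a": {"app_store", "google_play"},
    "dev_b": {"google_play"},
    "dev_c": {"app_store", "google_play", "windows_phone_store"},
}

multi_homers = [d for d, stores in developer_stores.items() if len(stores) > 1]
rate = len(multi_homers) / len(developer_stores)
print(f"multi-homing rate: {rate:.2%}")  # share of developers publishing on >1 marketplace
```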
Abstract:
Non-profit associations play an increasingly important role in citizens' lives and represent a significant productive sector of the country; very often, however, it is difficult to find information about their events, activities, or even their existence. To meet citizens' needs, many Regions and Provinces make available directories collecting information about the organizations operating in their territory. These directories, however, often have serious problems, both in the correctness of the data and in the formats used for publication. These factors led to the idea of, and the need for, a system to collect, organize and make available the information about non-profit associations in the territory, so that the data can be used freely by anyone for different purposes. This work therefore has two main objectives: the first is the implementation of a tool able to retrieve information about non-profit associations from their websites, using web crawling and web scraping techniques; the second is to publish the collected information according to models that allow free and unrestricted use. For publishing and structuring the data, a model based on linked open data principles was used.
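As an illustrative sketch of the two steps described, the snippet below scrapes basic facts from an association's website and expresses them as linked open data (JSON-LD with the schema.org vocabulary); the URL and CSS selectors are assumptions, not the thesis's actual configuration.

```python
# Sketch: scrape an association's name and contact e-mail and publish them as
# a JSON-LD record (schema.org NGO). URL and selectors are illustrative.
import json

import requests
from bs4 import BeautifulSoup

url = "https://example.org/associazione"  # placeholder association site
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

name = soup.select_one("h1")                      # assumed to hold the name
email = soup.select_one("a[href^='mailto:']")     # assumed contact link

record = {
    "@context": "https://schema.org",
    "@type": "NGO",
    "name": name.get_text(strip=True) if name else None,
    "email": email.get_text(strip=True) if email else None,
    "url": url,
}
print(json.dumps(record, ensure_ascii=False, indent=2))
```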
Abstract:
This poster presentation from the May 2015 Florida Library Association Conference, along with the Everglades Explorer discovery portal at http://ee.fiu.edu, demonstrates how traditional bibliographic and curatorial principles can be applied to: 1) selection, cross-walking and aggregation of metadata linking end-users to wide-spread digital resources from multiple silos; 2) harvesting of select PDFs, HTML and media for web archiving and access; 3) selection of CMS domains, sub-domains and folders for targeted searching using an API. Choosing content for this discovery portal is comparable to the past scholarly practice of creating and publishing subject bibliographies, except that metadata and data are housed in relational databases. This new and yet traditional capacity coincides with the growth of bibliographic utilities (MarcEdit), the evolution of open-source discovery systems (eXtensible Catalog), the development of target-capable web crawling and archiving systems (Archive-It), and specialized search APIs (Google). At the same time, historical and technical changes, specifically the increasing fluidity and re-purposing of syndicated metadata, make this possible. It equally stems from the expansion of freely accessible digitized legacy and born-digital resources. Innovation principles helped frame the process by which the thematic Everglades discovery portal was created at Florida International University. The path to more effective searching and co-location of digital scientific, educational and historical material related to the Everglades is contextualized through five concepts found in Dyer and Christensen's The Innovator's DNA: Mastering the Five Skills of Disruptive Innovators (2011). The project also aligns with Ranganathan's Laws of Library Science, especially the 4th Law: to "save the time of the user."
Abstract:
The growing availability and popularity of opinion-rich resources on the web, such as review sites and personal blogs, has made it convenient to find out about the opinions and experiences of ordinary people. At the same time, however, this huge eruption of data has made it difficult to reach a conclusion. In this thesis, I develop a novel recommendation system, Recomendr, that can help users digest all the reviews about an entity and compare candidate entities along ad-hoc dimensions specified by keywords. It takes keyword-specified ad-hoc dimensions/features as input from the user and, based on those features, compares the selected range of entities using reviews provided in the related User Generated Content (UGC), e.g. online reviews. It then rates the textual stream of data using a scoring function and returns a decision based on the aggregate opinion to the user. Evaluation of Recomendr using a data set in the laptop domain shows that it can effectively recommend the best laptop according to user-specified dimensions such as price. Recomendr is a general system that can potentially work for any entities on which online reviews or other opinionated text are available.
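A minimal sketch of keyword-dimension scoring over reviews in the spirit described above; the sentiment lexicon, sample reviews and scoring rule are illustrative assumptions rather than Recomendr's actual scoring function.

```python
# Sketch: score entities on a keyword-specified dimension by aggregating a tiny
# sentiment lexicon over reviews that mention the dimension. Illustrative only.
POSITIVE = {"great", "fast", "cheap", "excellent"}
NEGATIVE = {"slow", "expensive", "poor", "bad"}

def score(reviews, dimension):
    """Aggregate opinion about `dimension` (e.g. 'price') across reviews."""
    total = 0
    for review in reviews:
        words = review.lower().split()
        if dimension not in words:
            continue  # only reviews mentioning the dimension count
        total += sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return total

laptops = {
    "laptop_x": ["great price and fast", "battery is poor"],
    "laptop_y": ["price is expensive", "excellent screen"],
}
best = max(laptops, key=lambda name: score(laptops[name], "price"))
print("recommended:", best)  # -> laptop_x for the toy data above
```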
Abstract:
This thesis studies the use of web crawling, web scraping and Natural Language Processing techniques to automatically build a dataset of documents and a knowledge base of verb-object pairs usable for text classification. After a brief introduction to the techniques used, the generation method is presented, first in a theoretical form generalizable to any classification based on a set of topics, and then in a specific way through a case study: the SDG Detector software. The latter concerns the practical application of the method to build a collection of information useful for classifying documents according to the presence of one or more Sustainable Development Goals. The classification part is handled by the co-author of this application; this thesis instead focuses on an analysis of correctness and performance based on the expansion of the dataset and of the resulting knowledge base.
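As one plausible way to populate such a verb-object knowledge base, the sketch below extracts (verb, direct object) pairs with spaCy dependency parsing; the model name and sample sentence are assumptions, and the thesis's actual pipeline may differ.

```python
# Sketch: extract (verb lemma, direct-object lemma) pairs via dependency parsing.
# Assumes the spaCy model "en_core_web_sm" is installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def verb_object_pairs(text):
    """Return (verb, object) lemma pairs found by the dependency parser."""
    doc = nlp(text)
    pairs = []
    for token in doc:
        if token.dep_ == "dobj" and token.head.pos_ == "VERB":
            pairs.append((token.head.lemma_, token.lemma_))
    return pairs

print(verb_object_pairs("The project reduces emissions and improves water quality."))
# e.g. [('reduce', 'emission'), ('improve', 'quality')]
```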
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, three levels are defined at which discovery is performed: the content, service and agent levels.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, corresponds to content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules make it possible to specify behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By the definition of plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, in the Resulta and VulneraNET projects the discovery of ideas and of security knowledge on the web is covered, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web.

The main contribution of the thesis is the unified framework for discovery, which allows configuring agents to perform automated tasks. In addition, a scraping ontology has been defined for the construction of mappings for scraping web resources, and a novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
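A minimal sketch of a content-level discovery rule in the sense described above: a declarative mapping from CSS selectors in a resource's HTML representation to semantic entities, applied by a small scraper. The rule format, selectors and vocabulary URIs are assumptions, not the thesis's actual scraping ontology.

```python
# Sketch: content-level discovery rules mapping CSS selectors in an HTML
# representation to semantic types. Rule format and vocabulary are illustrative.
from bs4 import BeautifulSoup

RULES = [
    # (CSS selector, semantic type the matched text is mapped to)
    ("article h1", "http://schema.org/headline"),
    ("article .author", "http://schema.org/author"),
]

def discover(html):
    """Apply discovery rules to one representation, yielding (type, value) pairs."""
    soup = BeautifulSoup(html, "html.parser")
    for selector, semantic_type in RULES:
        for node in soup.select(selector):
            yield semantic_type, node.get_text(strip=True)

sample = "<article><h1>Budget approved</h1><p class='author'>A. Writer</p></article>"
print(list(discover(sample)))
```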
Abstract:
There has been a recent surge of research looking at the reporting of food consumption on social media. The topic of alcohol consumption, however, remains poorly investigated. Social media has the potential to shed light on a topic on which, traditionally, it is difficult to collect fine-grained information. One social app stands out in this regard: Untappd allows users to 'check in' their consumption of beers. It operates in a similar fashion to other location-based applications, but is specifically tailored to the collection of information on beer consumption. In this paper, we explore beer consumption through the lens of social media. We crawled Untappd in real time over a period of 112 days, across 40 cities in the United States and Europe. Using this data, we shed light on the drinking habits of over 369k users. We focus on per-user and per-city characterisation, highlighting key behavioural trends.
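A sketch of the periodic polling pattern behind such a real-time crawl, aggregating check-ins per city; the endpoint URL, query parameters and response format are hypothetical placeholders, not the actual Untappd API, which the abstract does not document.

```python
# Sketch: periodic polling loop accumulating per-city check-in counts.
# The endpoint, parameters and JSON shape are hypothetical placeholders.
import time
from collections import Counter

import requests

CITIES = ["London", "New York"]
checkins_per_city = Counter()

for _ in range(3):                          # a real crawl would run for ~112 days
    for city in CITIES:
        resp = requests.get(
            "https://api.example.org/checkins",   # hypothetical feed endpoint
            params={"city": city},
            timeout=10,
        )
        if resp.ok:
            checkins_per_city[city] += len(resp.json())  # assumes a JSON list of check-ins
    time.sleep(60)                          # poll interval; tune for rate limits

print(checkins_per_city)
```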
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
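A minimal sketch of writing an XGMML file of the kind the Interactome module exports for Cytoscape; the interaction pairs are made-up examples and only node/edge labels are written, whereas IIS exports far richer annotation.

```python
# Sketch: write a minimal XGMML network (importable into Cytoscape) from a list
# of interaction pairs. The pairs are illustrative; only labels are exported.
import xml.etree.ElementTree as ET

interactions = [("TP53", "MDM2"), ("TP53", "EP300")]  # made-up example pairs

graph = ET.Element("graph", label="example network",
                   xmlns="http://www.cs.rpi.edu/XGMML")
ids = {}
for a, b in interactions:
    for name in (a, b):
        if name not in ids:                       # one node element per protein
            ids[name] = str(len(ids) + 1)
            ET.SubElement(graph, "node", id=ids[name], label=name)
    ET.SubElement(graph, "edge", source=ids[a], target=ids[b],
                  label=f"{a} (pp) {b}")          # protein-protein edge

ET.ElementTree(graph).write("network.xgmml", encoding="utf-8", xml_declaration=True)
```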
Abstract:
The use of the web to provide information and services from government bodies to citizens has become increasingly significant. Ensuring that this content and these services are accessible to every citizen is therefore essential, regardless of special needs or any other barriers. In Brazil, Decree-Law No. 5,296/2004 determined that all government bodies should adapt their websites to accessibility criteria by December 2005. In order to verify how accessibility evolved over the years and what the impact of this legislation was, this article analyses the accessibility of Brazilian state government websites using samples collected between 1996 and 2007. The analyses were carried out using metrics obtained from evaluations with automatic tools. The results indicate that the legislation had little impact on the real improvement of website accessibility in the period studied, with an improvement only in 2007. More effective public policies are needed so that people with special needs have their rights of access to public information and services on the web more broadly ensured.