999 results for Web wrapper
Abstract:
A Web wrapper extracts data from HTML documents. The accuracy and quality of the information extracted by a Web wrapper rely on the structure of the HTML document; if the document changes, the wrapper may no longer function correctly. This paper presents an Adjacency-Weight method that can be used in the Web wrapper extraction process, or in a wrapper self-maintenance mechanism, to validate Web wrappers. The algorithm and data structures are illustrated with intuitive examples.
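The abstract does not spell the algorithm out, so the following is only a minimal Python sketch of one plausible reading of such a validation step: the weight of the node a wrapper targets is derived from the tags adjacent to it (parent and siblings), and the wrapper is judged valid only if that weight profile survives a page change. The function names, the weighting scheme and the use of BeautifulSoup are assumptions, not the paper's method.

# Minimal sketch (not the paper's actual algorithm) of an adjacency-based
# wrapper validity check.
from collections import Counter
from bs4 import BeautifulSoup  # third-party HTML parser, used here for brevity

def adjacency_weight(node):
    """Count the tag names of the parent and element siblings of `node`."""
    neighbours = []
    if node.parent is not None:
        neighbours.append(node.parent.name)
    neighbours += [s.name for s in node.find_previous_siblings(True)]
    neighbours += [s.name for s in node.find_next_siblings(True)]
    return Counter(neighbours)

def wrapper_still_valid(old_html, new_html, css_selector, threshold=0.8):
    """Compare adjacency weights of the wrapper's target node in two page versions."""
    old_node = BeautifulSoup(old_html, "html.parser").select_one(css_selector)
    new_node = BeautifulSoup(new_html, "html.parser").select_one(css_selector)
    if old_node is None or new_node is None:
        return False
    old_w, new_w = adjacency_weight(old_node), adjacency_weight(new_node)
    overlap = sum((old_w & new_w).values())      # shared neighbour tags
    total = max(sum(old_w.values()), 1)
    return overlap / total >= threshold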
Abstract:
Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science.
Abstract:
This thesis aims to contribute to the construction of software development environments by proposing a reflective architecture for process support environments, named WRAPPER (Web-based Reflective Architecture for Process suPport EnviRonment). The goal of this architecture is to provide an infrastructure for a software process support environment, integrating World Wide Web technologies, distributed objects and computational reflection. The main motivation for the architecture comes from the need for greater flexibility in software process management. This flexibility is obtained through reflective objects, which allow a process manager to obtain information about the software process and also to change it dynamically. To obtain an integrated environment, the architecture provides facilities for aggregating CASE tools from different platforms and vendors, even when they are available at remote locations. The integration of heterogeneous and distributed tools is achieved through the use of Web technologies and distributed objects. Computational reflection is used in the environment both to extract data from process execution and to allow the process to be adapted. This is done by introducing and controlling meta-objects, at the meta-level of the architecture, that can monitor and even change base-level objects. As a result, the architecture provides the following characteristics: flexibility in process management, allowing the process to be controlled and adapted; distribution of the environment over the Web, allowing software process tasks to be distributed and tools at remote locations to be integrated; and heterogeneity in aggregating components to the environment, allowing tools from different platforms and vendors to be used. In this context, this work presents the structure of the reflective architecture, as well as the mechanisms used (and their interactions) for modelling and executing processes within the software process support environment.
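As an illustration of the meta-object mechanism only (not of the WRAPPER architecture itself, which relies on Web technologies and distributed objects), a minimal Python sketch of a meta-level object that monitors and dynamically alters a base-level process object could look as follows; all class and method names are hypothetical.

# Base-level object: an ordinary software-process task.
class ProcessTask:
    def execute(self, artifact):
        return f"executed {artifact}"

# Meta-level object: intercepts calls to the base-level object, recording them
# (monitoring) and allowing behaviour to be replaced at run time (adaptation).
class MetaObject:
    def __init__(self, base):
        self._base = base
        self.trace = []        # record of every intercepted call
        self.overrides = {}    # dynamically substituted behaviour

    def __getattr__(self, name):
        target = self.overrides.get(name, getattr(self._base, name))
        def intercepted(*args, **kwargs):
            self.trace.append((name, args, kwargs))
            return target(*args, **kwargs)
        return intercepted

task = MetaObject(ProcessTask())
task.execute("design document")                        # monitored call
task.overrides["execute"] = lambda a: f"skipped {a}"   # process adapted at run time
task.execute("design document")                        # now uses the new behaviour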
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed a Data Extractor data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a two-fold "custom wrapper" approach for information retrieval. Wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server, as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites. The approach provides accurate data retrieval, as well as power and flexibility in handling complex cases.
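As a rough illustration of the custom-wrapper idea, the sketch below (in Python, rather than the scripting language or Java wrappers the thesis describes) hides fetching and parsing behind a query-style interface so that a caller sees the site as a relational-style data source; the URL and the table-based extraction are hypothetical placeholders.

# Minimal custom-wrapper sketch: one Web page's table exposed as tuples.
import urllib.request
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell, grouped into rows by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True
    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

class SiteWrapper:
    """Treat a Web page as a data source returning rows of field values."""
    def __init__(self, url):
        self.url = url
    def query(self):
        with urllib.request.urlopen(self.url) as resp:        # fetch
            html = resp.read().decode("utf-8", errors="replace")
        collector = CellCollector()
        collector.feed(html)                                   # parse
        return [tuple(row) for row in collector.rows]          # relational view

# rows = SiteWrapper("https://example.org/catalog").query()   # hypothetical URL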
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
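To make the Interactome module's output format concrete, the following is a minimal sketch, not IIS's own exporter, of writing a small interaction network as XGMML so that it can be opened in Cytoscape; the gene names are hypothetical and only basic node and edge attributes are emitted.

# Minimal XGMML writer for a toy protein-protein interaction network.
import xml.etree.ElementTree as ET

def write_xgmml(interactions, path, label="network"):
    """interactions: iterable of (protein_a, protein_b) pairs."""
    graph = ET.Element("graph", label=label,
                       xmlns="http://www.cs.rpi.edu/XGMML")
    nodes = {}
    for a, b in interactions:
        for name in (a, b):
            if name not in nodes:                 # one node element per protein
                nodes[name] = str(len(nodes) + 1)
                ET.SubElement(graph, "node", id=nodes[name], label=name)
    for a, b in interactions:                     # one edge element per interaction
        ET.SubElement(graph, "edge", source=nodes[a], target=nodes[b],
                      label=f"{a} (pp) {b}")
    ET.ElementTree(graph).write(path, xml_declaration=True, encoding="utf-8")

write_xgmml([("YFG1", "YFG2"), ("YFG2", "YFG3")], "network.xgmml")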
Abstract:
The use of the web to make government agencies' information and services available to citizens has become increasingly significant. It is therefore essential to guarantee that this content and these services are accessible to every citizen, regardless of special needs or any other barriers. In Brazil, Decree-Law No. 5.296/2004 determined that all government agencies should adapt their web sites to accessibility criteria by December 2005. In order to examine how accessibility has evolved over the years and what impact this legislation has had, this article analyses the accessibility of Brazilian state government web sites through samples collected between 1996 and 2007. The analyses were carried out using metrics obtained from evaluations with automated tools. The results indicate that the legislation had little impact on the real improvement of the sites' accessibility in the period studied, with an improvement only in 2007. More effective public policies are clearly needed so that people with special needs have their rights of access to public information and services on the web more broadly assured.
Abstract:
With the advent and development of technology, especially the Internet, more and more electronic services are being offered to customers in all areas of business, particularly information services such as those of virtual libraries. This article proposes a new opportunity to provide services to virtual library customers, presenting a methodology for implementing electronic services oriented by these customers' life situations. Through analytical observation of some national virtual library sites, it was identified that offering services organized around life situations and relationship-interest situations can improve service to customers, providing greater satisfaction and, consequently, better quality in the offer of information services. The visits to those sites and the critical analysis of the data collected during them, supported by the results of bibliographic research, enabled the description of this methodology. We conclude that providing services in isolation, or only according to the user's profile, on virtual library sites is not always enough to meet customers' needs and expectations, which suggests offering these services organized around life situations and relationship-interest situations as a complement that adds value to the virtual library's business. This is relevant because it points to new opportunities to provide virtual library services with quality, serving as a guide for managers of information providers and enabling new means of access to information services by such customers, aiming at proactivity and service integration in order to solve real problems definitively.
Abstract:
Background: A relative inability to capture a sufficiently large patient population in any one geographic location has traditionally limited research into rare diseases. Methods and Results: Clinicians interested in the rare disease lymphangioleiomyomatosis (LAM) have worked with the LAM Treatment Alliance, the MIT Media Lab, and Clozure Associates to cooperate in the design of a state-of-the-art data coordination platform that can be used for clinical trials and other research focused on the global LAM patient population. This platform is a component of a set of web-based resources, including a patient self-report data portal, aimed at accelerating research in rare diseases in a rigorous fashion. Conclusions: Collaboration between clinicians, researchers, advocacy groups, and patients can create essential community resource infrastructure to accelerate rare disease research. The International LAM Registry is an example of such an effort.
Abstract:
The dynamical discrete web (DyDW), introduced in the recent work of Howitt and Warren, is a system of coalescing simple symmetric one-dimensional random walks which evolve in an extra continuous dynamical time parameter tau. The evolution is by independent updating of the underlying Bernoulli variables indexed by discrete space-time that define the discrete web at any fixed tau. In this paper, we study the existence of exceptional (random) values of tau where the paths of the web do not behave like usual random walks and the Hausdorff dimension of the set of such exceptional tau. Our results are motivated by those about exceptional times for dynamical percolation in high dimension by Haggstrom, Peres and Steif, and in dimension two by Schramm and Steif. The exceptional behavior of the walks in the DyDW is rather different from the situation for the dynamical random walks of Benjamini, Haggstrom, Peres and Steif. For example, we prove that the walk from the origin S(0)(tau) violates the law of the iterated logarithm (LIL) on a set of tau of Hausdorff dimension one. We also discuss how these and other results should extend to the dynamical Brownian web, the natural scaling limit of the DyDW. (C) 2009 Elsevier B.V. All rights reserved.
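For reference, the law of the iterated logarithm that the walk S(0)(tau) violates at exceptional times is the classical statement for a simple symmetric random walk S_n (this is the standard formulation, not quoted from the paper):

\limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \log \log n}} = 1 \quad \text{almost surely.}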
Abstract:
Introduction: Internet users are increasingly using the worldwide web to search for information relating to their health. This situation makes it necessary to create specialized tools capable of supporting users in their searches. Objective: To apply and compare strategies developed to investigate the use of the Portuguese version of the Medical Subject Headings (MeSH) for constructing an automated classifier for Brazilian Portuguese-language web-based content within or outside the field of healthcare, focusing on the lay public. Methods: 3658 Brazilian web pages were used to train the classifier and 606 Brazilian web pages were used to validate it. The proposed strategies were constructed using content-based vector methods for text classification, with Naive Bayes used to classify vector patterns whose features were obtained through the proposed strategies. Results: A strategy named InDeCS was developed specifically to adapt MeSH to the problem at hand. This approach achieved better accuracy for this pattern classification task (0.94 sensitivity, specificity and area under the ROC curve). Conclusions: Because of the significant results achieved by InDeCS, this tool has been successfully applied to the Brazilian healthcare search portal known as Busca Saude. Furthermore, it could be shown that MeSH yields important results when used to classify web-based content aimed at the lay public. This study also showed that MeSH was able to map out mutable, non-deterministic characteristics of the web. (c) 2010 Elsevier Inc. All rights reserved.
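As a rough illustration of the classification step only, the Python sketch below shows a generic term-count plus Naive Bayes pipeline with scikit-learn; it does not reproduce the MeSH-based feature strategy (InDeCS) itself, and the toy pages and labels are hypothetical.

# Generic content-based Naive Bayes text classifier (health vs. other pages).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pages = ["sintomas de diabetes e hipertensão arterial",
         "calendário de vacinação infantil do posto de saúde",
         "tabela do campeonato brasileiro de futebol",
         "receita de bolo de cenoura com cobertura"]
labels = ["health", "health", "other", "other"]

classifier = make_pipeline(CountVectorizer(),   # term-count vectors per page
                           MultinomialNB())     # Naive Bayes over those vectors
classifier.fit(pages, labels)
print(classifier.predict(["vacinação contra a gripe"]))   # expected: ['health']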
Abstract:
The large amount of information in electronic contracts hampers their establishment due to the resulting high complexity. An approach inspired by Software Product Lines (PL) and based on feature modelling was proposed to make this process more systematic through information reuse and structuring. By assessing the feature-based approach against a proposed set of requirements, it was shown that the approach does not allow the prices of services and of Quality of Service (QoS) attributes to be considered in the negotiation and included in the electronic contract. Thus, this paper also presents an extension of that approach in which prices and price types associated with Web services and QoS levels are supported. An extended toolkit prototype is also presented, as well as an example experiment with the proposed approach.
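As a rough illustration of what the extension implies, the Python sketch below models a Web-service feature that carries a price, a price type and priced QoS levels, so that a negotiated selection can be priced into the contract; all field names and values are hypothetical and not taken from the toolkit.

# Hypothetical priced feature for an electronic-contract feature model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class QoSLevel:
    attribute: str          # e.g. "availability"
    level: str              # e.g. "99.9%"
    price: float            # surcharge for choosing this QoS level

@dataclass
class ServiceFeature:
    name: str
    price: float
    price_type: str         # e.g. "per-invocation", "monthly"
    qos_levels: List[QoSLevel] = field(default_factory=list)

    def contract_price(self, chosen: List[str]) -> float:
        """Base price plus the surcharges of the chosen QoS attributes."""
        return self.price + sum(q.price for q in self.qos_levels
                                if q.attribute in chosen)

payment = ServiceFeature("payment-service", 10.0, "monthly",
                         [QoSLevel("availability", "99.9%", 2.5)])
print(payment.contract_price(["availability"]))   # 12.5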
Abstract:
The solution structure of robustoxin, the lethal neurotoxin from the Sydney funnel-web spider Atrax robustus, has been determined from 2D H-1 NMR data. Robustoxin is a polypeptide of 42 residues cross-linked by four disulphide bonds, the connectivities of which were determined from NMR data and trial structure calculations to be 1-15, 8-20, 14-31 and 16-42 (a 1-4/2-6/3-7/5-8 pattern). The structure consists of a small three-stranded, anti-parallel beta-sheet and a series of interlocking gamma-turns at the C-terminus. It also contains a cystine knot, thus placing it in the inhibitor cystine knot motif family of structures, which includes the omega-conotoxins and a number of plant and animal toxins and protease inhibitors. Robustoxin contains three distinct charged patches on its surface, and an extended loop that includes several aromatic and non-polar residues. Both of these structural features may play a role in its binding to the voltage-gated sodium channel. (C) 1997 Federation of European Biochemical Societies.
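The correspondence between the residue pairs and the quoted 1-4/2-6/3-7/5-8 pattern can be checked by numbering the eight cysteines in sequence order, as in this small Python snippet:

# Re-express the disulphide-bonded residue pairs as cysteine-ordinal pairs.
pairs = [(1, 15), (8, 20), (14, 31), (16, 42)]   # bonded cysteine residues
cys_order = {res: i + 1 for i, res in
             enumerate(sorted(r for p in pairs for r in p))}
pattern = ["%d-%d" % (cys_order[a], cys_order[b]) for a, b in pairs]
print("/".join(pattern))   # prints 1-4/2-6/3-7/5-8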
Abstract:
Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine; that is, the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
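As a rough illustration of the server-side idea, the Python sketch below stores geometries at a few precomputed resolution levels and answers a query from the coarsest level that still meets the application's tolerance, so that less data is processed and shipped; the level structure and tolerances are hypothetical, not the paper's storage scheme.

# Minimal multiresolution layer: pick the coarsest acceptable level per query.
class MultiResolutionLayer:
    def __init__(self, levels):
        # levels: {tolerance_in_metres: list_of_simplified_geometries};
        # larger tolerance -> coarser data, smaller payload.
        self.tolerances = sorted(levels)
        self.levels = levels

    def query(self, required_tolerance):
        """Return geometries from the coarsest level still within tolerance."""
        usable = [t for t in self.tolerances if t <= required_tolerance]
        chosen = max(usable) if usable else min(self.tolerances)
        return self.levels[chosen]

layer = MultiResolutionLayer({
    1:    ["full-detail polygons ..."],    # e.g. parcel-level Web mapping
    50:   ["simplified polygons ..."],     # city-scale overview map
    1000: ["coarse outlines ..."],         # whole-country thumbnail
})
coarse = layer.query(required_tolerance=200)   # returns the 50 m level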