Visually extracting data records from the deep web


Author(s): Anderson, Neil; Hong, Jun
Date(s)

2013

Abstract

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations, we propose new techniques that can identify each data record individually, while ignoring noise items such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
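The abstract describes grouping items on a rendered query result page by structural, visual, and content similarity, and discarding noise such as navigation bars and adverts. The sketch below is a minimal, hypothetical illustration of that general idea only; it is not the authors' rExtractor implementation, and the `Block`, `similar`, and `extract_records` names, the pixel thresholds, and the toy page data are all assumptions introduced for illustration.

```python
from dataclasses import dataclass


@dataclass
class Block:
    """A rendered page element with its bounding box and a crude content signature.

    In practice such blocks would come from a browser's layout engine; here they
    are hand-constructed toy values (an assumption for this sketch).
    """
    tag_path: str   # simplified DOM path, e.g. "body/div/ul/li"
    x: int          # left edge of the rendered box, in pixels
    width: int      # rendered width, in pixels
    height: int     # rendered height, in pixels
    text_len: int   # number of visible text characters (content signature)


def similar(a: Block, b: Block) -> bool:
    """Two blocks look like sibling data records if they share a DOM path,
    are left-aligned, and have comparable width and amount of text.
    The thresholds (5 px, 20 px, 50%) are illustrative, not from the paper."""
    return (
        a.tag_path == b.tag_path
        and abs(a.x - b.x) <= 5
        and abs(a.width - b.width) <= 20
        and abs(a.text_len - b.text_len) <= 0.5 * max(a.text_len, b.text_len, 1)
    )


def extract_records(blocks: list[Block]) -> list[Block]:
    """Group mutually similar blocks and keep the largest group.

    Items that do not repeat with similar structure, appearance, and content
    (navigation bars, adverts) end up in small groups and are discarded."""
    groups: list[list[Block]] = []
    for block in blocks:
        for group in groups:
            if similar(block, group[0]):
                group.append(block)
                break
        else:
            groups.append([block])
    return max(groups, key=len) if groups else []


# Toy query result page: three repeated result entries plus a nav bar and an advert.
page = [
    Block("body/div/ul/li", x=40, width=600, height=90, text_len=120),
    Block("body/div/ul/li", x=40, width=600, height=85, text_len=110),
    Block("body/div/ul/li", x=40, width=598, height=95, text_len=130),
    Block("body/nav", x=0, width=1024, height=40, text_len=30),
    Block("body/div/aside", x=700, width=300, height=250, text_len=15),
]

for record in extract_records(page):
    print(record)  # only the three repeated result entries are printed
```

This toy version reduces "visual and content similarity" to alignment, width, and text length; the approach described in the paper works on the page as actually rendered by a web browser and uses richer evidence than this sketch.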

Identifier

http://pure.qub.ac.uk/portal/en/publications/visually-extracting-data-records-from-the-deep-web(2fa06eef-9682-47f9-8170-5b6b5b758801).html

Language(s)

eng

Publisher

ACM

Rights

info:eu-repo/semantics/restrictedAccess

Source

Anderson , N & Hong , J 2013 , Visually extracting data records from the deep web . in WWW '13 Companion Proceedings of the 22nd International Conference on World Wide Web . ACM , pp. 1233-1238 , 22nd International World Wide Web Conference , Rio de Janeiro , Brazil , 13-17 May .

Keywords

#/dk/atira/pure/subjectarea/asjc/1700/1705 #Computer Networks and Communications

Type

contributionToPeriodical