917 results for Web page


Relevance:

100.00%

Publisher:

Abstract:

Given the size and ever-changing state of the Internet today, a good approach to organizing this mass of information is of great importance. Clustering web pages into groups of similar documents is one such approach, but it relies heavily on good feature extraction and document representation as well as a good clustering approach and algorithm. Because the Internet changes continually, producing a dynamic dataset, an incremental approach is preferred. In this work we propose an enhanced incremental clustering approach to develop a better clustering algorithm that can help organize the information available on the Internet in an incremental fashion. Experiments show that the enhanced algorithm outperforms the original histogram-based algorithm by up to 7.5%.
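
As a rough illustration of the general idea only (not the authors' exact algorithm), the sketch below clusters arriving pages incrementally using term-frequency histograms and cosine similarity; the threshold value and the merge rule are assumptions made for illustration.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency histograms."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class IncrementalClusterer:
    """Assign each arriving page to the most similar cluster histogram,
    or start a new cluster if no existing cluster is similar enough."""

    def __init__(self, threshold: float = 0.3):
        self.threshold = threshold
        self.clusters: list[Counter] = []   # one aggregate histogram per cluster

    def add(self, tokens: list[str]) -> int:
        hist = Counter(tokens)
        best, best_sim = -1, 0.0
        for i, cluster_hist in enumerate(self.clusters):
            sim = cosine(hist, cluster_hist)
            if sim > best_sim:
                best, best_sim = i, sim
        if best_sim >= self.threshold:
            self.clusters[best].update(hist)   # merge the page into the cluster histogram
            return best
        self.clusters.append(hist)             # open a new cluster
        return len(self.clusters) - 1

clu = IncrementalClusterer()
print(clu.add("python web page clustering".split()))            # 0: first cluster
print(clu.add("clustering web page python tutorial".split()))   # joins cluster 0 (high overlap)
```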

Relevance:

100.00%

Publisher:

Abstract:

Many web sites incorporate dynamic web pages to deliver customized content to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster under normal operating conditions. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved compared to a pure fragment caching approach.
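
A toy sketch of the general mechanism described above, with made-up names and costs rather than the authors' implementation: fragments are cached, whole pages are pre-generated from an anticipated-request list, and a load signal feeds back into the pre-generation budget.

```python
import time

class DynamicPageServer:
    """Toy model: cache page fragments, pre-generate whole pages in spare
    capacity, and throttle pre-generation as the load signal rises."""

    def __init__(self, max_pregen_per_tick: int = 5):
        self.fragment_cache: dict[str, str] = {}
        self.page_cache: dict[str, str] = {}
        self.max_pregen_per_tick = max_pregen_per_tick

    def render_fragment(self, frag_id: str) -> str:
        if frag_id not in self.fragment_cache:
            time.sleep(0.001)                      # stand-in for fragment construction cost
            self.fragment_cache[frag_id] = f"<div>{frag_id}</div>"
        return self.fragment_cache[frag_id]

    def build_page(self, page_id: str, frag_ids: list[str]) -> str:
        page = "".join(self.render_fragment(f) for f in frag_ids)
        self.page_cache[page_id] = page
        return page

    def serve(self, page_id: str, frag_ids: list[str]) -> str:
        # Serve a pre-generated page if available, otherwise assemble it from fragments.
        return self.page_cache.get(page_id) or self.build_page(page_id, frag_ids)

    def pregenerate(self, anticipated: list[tuple[str, list[str]]], load: float) -> None:
        # Feedback: the higher the current load (0..1), the fewer pages we pre-build.
        budget = int(self.max_pregen_per_tick * max(0.0, 1.0 - load))
        for page_id, frag_ids in anticipated[:budget]:
            if page_id not in self.page_cache:
                self.build_page(page_id, frag_ids)

srv = DynamicPageServer()
srv.pregenerate([("home", ["header", "news", "footer"])], load=0.2)
print(srv.serve("home", ["header", "news", "footer"]))   # served from the page cache
```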

Relevance:

100.00%

Publisher:

Abstract:

Automatically determining and assigning shared and meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels that describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, two open problems remain: (1) assigning an explicit label to a data item, and (2) determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels which can be assigned to extracted data items. However, there is a need for a technique that takes a set of extracted data items as input and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques that solve problems similar to our own. In this work-in-progress paper we propose to theoretically and experimentally evaluate different IE techniques to ascertain which is most suitable for solving this problem.
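
Purely as an illustration of the labelling problem itself (not one of the IE techniques the authors plan to evaluate), here is a minimal sketch that assigns labels from a small, hypothetical shared vocabulary using simple pattern rules.

```python
import re

# Hypothetical shared vocabulary: label -> pattern that matching values tend to follow.
VOCABULARY = {
    "price":        re.compile(r"^[£$€]\s?\d+(\.\d{2})?$"),
    "rating":       re.compile(r"^\d(\.\d)?\s*/\s*5$"),
    "product_name": re.compile(r"^[A-Za-z0-9][\w\s\-]{2,}$"),
}

def assign_labels(items: list[str]) -> dict[str, str]:
    """Assign the first vocabulary label whose pattern matches each extracted item."""
    labels = {}
    for item in items:
        for label, pattern in VOCABULARY.items():
            if pattern.match(item.strip()):
                labels[item] = label
                break
        else:
            labels[item] = "unknown"
    return labels

print(assign_labels(["Acme Wireless Mouse", "$19.99", "4.5 / 5"]))
```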

Relevance:

100.00%

Publisher:

Abstract:

Despite the increasing presence of Semantic Web facilities, only a limited proportion of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. The framework is structured in three levels: scraping services, the semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
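
A minimal sketch of that layering, assuming rdflib and BeautifulSoup and a made-up scraping vocabulary (the article's actual RDF model will differ): a declarative RDF rule carries a CSS selector (syntactic level) and a target property (semantic level), and a small interpreter executes it against a page.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from bs4 import BeautifulSoup

# Hypothetical scraping vocabulary, for illustration only.
SC = Namespace("http://example.org/scraping#")

def build_scraper_model() -> Graph:
    """Describe one scraping rule declaratively, as RDF triples."""
    g = Graph()
    rule = URIRef("http://example.org/rules/headline")
    g.add((rule, SC.selector, Literal("h1.title")))   # syntactic level: CSS selector
    g.add((rule, SC.property, SC.headline))           # semantic level: target property
    return g

def run_scraper(model: Graph, html: str) -> Graph:
    """Interpret the RDF rules against an HTML document, producing RDF data."""
    soup = BeautifulSoup(html, "html.parser")
    out = Graph()
    page = URIRef("http://example.org/page/1")
    for rule, _, selector in model.triples((None, SC.selector, None)):
        prop = model.value(rule, SC.property)
        for node in soup.select(str(selector)):
            out.add((page, prop, Literal(node.get_text(strip=True))))
    return out

html = "<html><body><h1 class='title'>Semantic scraping</h1></body></html>"
print(run_scraper(build_scraper_model(), html).serialize(format="turtle"))
```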

Relevance:

100.00%

Publisher:

Abstract:

Our research explores the possibility of categorizing webpages and webpage genre by structure or layout. Based on our results, we believe that webpage structure could play an important role, along with textual and visual keywords, in webpage categorization and searching.
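
As a hedged illustration of what categorizing by structure can mean (the study's actual feature set is not specified here), a page can be represented by the relative frequencies of layout-bearing tags and that vector fed to any standard classifier.

```python
from collections import Counter
from bs4 import BeautifulSoup

# Illustrative structural features only; chosen for the example, not taken from the study.
STRUCTURAL_TAGS = ["div", "table", "form", "img", "a", "ul", "h1", "h2", "p"]

def structure_vector(html: str) -> list[float]:
    """Represent a page by the relative frequency of selected layout tags."""
    soup = BeautifulSoup(html, "html.parser")
    counts = Counter(tag.name for tag in soup.find_all(True))
    total = sum(counts.values()) or 1
    return [counts.get(t, 0) / total for t in STRUCTURAL_TAGS]

print(structure_vector("<html><body><h1>News</h1><p>text</p><p>text</p></body></html>"))
```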

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a new approach to web browsing in situations where the user can interact with the device only through a single input command device (a switch). Switches have been developed, for example, for people with locked-in syndrome and are used in combination with scanning to navigate virtual keyboards and desktop interfaces. Our proposed approach leverages the hierarchical structure of webpages to operate a multi-level scan of the actionable elements of a webpage (links or form elements). As a few methods already exist to facilitate browsing under these conditions, we present a theoretical usability evaluation of our approach in comparison to the existing ones, which takes into account the average time taken to reach any part of a web page (such as a link or a form) as well as the number of clicks necessary to reach the goal. We argue that these factors together contribute to usability. In addition, we propose that our approach offers additional usability benefits.
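
To make the two usability factors concrete, here is a small sketch with illustrative numbers (not the paper's data or model) comparing flat scanning with a two-level hierarchical scan over N actionable elements: the average number of highlighting steps to reach a random target, and the number of switch activations needed.

```python
from math import ceil, sqrt

def linear_scan_cost(n: int) -> tuple[float, int]:
    """Flat scanning: items are highlighted one by one; one click selects."""
    avg_steps = (n + 1) / 2
    clicks = 1
    return avg_steps, clicks

def two_level_scan_cost(n: int, group_size: int = 0) -> tuple[float, int]:
    """Hierarchical scanning: first scan groups, then items in the chosen group."""
    g = group_size or max(1, round(sqrt(n)))     # sqrt(n)-sized groups, a common rule of thumb
    groups = ceil(n / g)
    avg_steps = (groups + 1) / 2 + (g + 1) / 2   # expected highlights at each level
    clicks = 2                                    # one click per level
    return avg_steps, clicks

for n in (20, 100, 400):
    print(n, linear_scan_cost(n), two_level_scan_cost(n))
```

Under these assumptions the hierarchical scan trades one extra click for a large reduction in average scan steps as the page grows.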

Relevance:

70.00%

Publisher:

Abstract:

Introduction: Research highlights that patients with dual diagnoses of type 2 diabetes and acute coronary syndrome (ACS) have higher readmission rates and poorer health outcomes than patients with singular chronic conditions. Despite this, there is a lack of education programs targeted at this dual-diagnosis population to improve self-management and decrease negative health outcomes. There is evidence to suggest that internet-based interventions may improve health outcomes for patients with singular chronic conditions; however, there is a need to develop an evidence base for ACS patients with comorbid diabetes. There is a growing awareness of the importance of a participatory model in developing effective online interventions. That is, internet interventions are more effective if end users' perceptions of the intervention are incorporated in their final development prior to testing in large-scale trials. Objectives: This study investigated patients' perspectives on a web-based intervention designed to promote self-management of the dual conditions, in order to refine the intervention prior to clinical trial evaluation. Methods: An interpretive approach with thematic analysis was used to obtain a deeper understanding of participants' experience when using web-application interventions for patients with ACS and type 2 diabetes. Semi-structured interviews were undertaken with a purposive sample of 30 patients meeting strict inclusion and exclusion criteria to obtain their perspectives on the program. Results: Preliminary results indicate that patients with dual diagnoses express more complex needs than those with a singular condition. Participants report a positive experience with the proposed internet intervention, and emerging themes include that the web page is seen as easy to use and comforting as a support, in that patients know they are not alone. Further results will be reported as they become available. Conclusion: The results indicate potential for patient acceptability of the newly developed internet intervention for patients with ACS and comorbid diabetes. Incorporation of patient perspectives into the final development of the intervention is likely to maximise successful outcomes of any future trials that utilise this intervention. Future quantitative evaluation of the effectiveness of the intervention is being planned.

Relevance:

70.00%

Publisher:

Abstract:

We present an empirical evaluation and comparison of two content extraction methods in HTML: absolute XPath expressions and relative XPath expressions. We argue that the relative XPath expressions, although not widely used, should be used in preference to absolute XPath expressions in extracting content from human-created Web documents. Evaluation of robustness covers four thousand queries executed on several hundred webpages. We show that in referencing parts of real world dynamic HTML documents, relative XPath expressions are on average significantly more robust than absolute XPath ones.
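
A minimal sketch of the contrast being evaluated, using lxml and a made-up page: the absolute XPath expression breaks when the page gains a wrapper element, while the relative expression, anchored on a local attribute, still matches.

```python
from lxml import html

original = """
<html><body>
  <div id="content">
    <h1 class="headline">Budget approved</h1>
  </div>
</body></html>"""

# The same page after a redesign wraps the content in an extra container.
redesigned = """
<html><body>
  <div class="wrapper">
    <div id="content">
      <h1 class="headline">Budget approved</h1>
    </div>
  </div>
</body></html>"""

absolute = "/html/body/div/h1"          # depends on the full path from the document root
relative = "//h1[@class='headline']"    # anchored on a local, more stable property

for name, page in (("original", original), ("redesigned", redesigned)):
    tree = html.fromstring(page)
    print(name,
          "absolute:", [e.text for e in tree.xpath(absolute)],   # empty list after redesign
          "relative:", [e.text for e in tree.xpath(relative)])   # matches in both versions
```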

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we first describe a framework that models the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction, which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality for the advertisers. We then undertake a detailed comparative study of the GSP, VCG, and OPT mechanisms. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and certain special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity.

Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page of results containing the links most relevant to the query as well as sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. For every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose; these are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, and MSN comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored search auction context, and pursue the objective of designing a mechanism that is superior to both. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of a non-negative payoff by doing so.
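
A worked toy example of the two classical payment rules (with illustrative bids and click-through rates, not the paper's model of the OPT mechanism): in GSP each winning advertiser pays the next-highest bid per click, while in VCG each pays the value its presence takes away from the other bidders.

```python
# Toy two-slot keyword auction with per-click bids; the CTRs and bids are made up.
ctr = [0.10, 0.05]                        # slot 1 attracts twice the clicks of slot 2
bids = {"A": 4.0, "B": 3.0, "C": 1.0}     # declared value per click

ranked = sorted(bids, key=bids.get, reverse=True)
winners = ranked[:len(ctr)]               # A takes slot 1, B takes slot 2

# GSP: each winner pays the next-highest bid per click.
gsp = {w: bids[ranked[i + 1]] for i, w in enumerate(winners)}

def total_value(order, slots):
    """Total value generated when bidders fill the slots in the given order."""
    return sum(bids[b] * c for b, c in zip(order, slots))

# VCG: each winner pays the value the other bidders lose because of its presence.
vcg = {}
for i, w in enumerate(winners):
    without_w = [b for b in ranked if b != w]
    payment = total_value(without_w, ctr) - (total_value(ranked, ctr) - bids[w] * ctr[i])
    vcg[w] = payment / ctr[i]             # express as a per-click price

print("GSP per-click prices:", gsp)       # {'A': 3.0, 'B': 1.0}
print("VCG per-click prices:", vcg)       # {'A': 2.0, 'B': 1.0}
```

In this example GSP extracts more revenue than VCG, which matches the commonly cited observation that GSP prices are at least as high as VCG prices under the same allocation.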

Relevance:

70.00%

Publisher:

Abstract:

Printed by the Diputación Foral de Álava, D.L. VI-430/99.

Relevance:

70.00%

Publisher:

Abstract:

Listed companies are required to maintain a web page through which they can meet shareholders' right to information and disseminate relevant and mandatory disclosures. For this reason, regulations were created to govern both the minimum information that companies must provide on their websites and the technical and legal requirements those pages must meet. Nevertheless, not all companies present the same information on their websites, since many do not limit themselves to what the law requires, and the information provided may be more or less useful to users. It is therefore essential to have some way of assessing the transparency that companies show to users, and in particular to shareholders, on their web pages: whether information asymmetry exists, what the quality of the information presented is, and how much confidence can be placed in it.