891 results for search frictions


Relevance:

20.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Hence, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
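
To make the form-querying task concrete, here is a minimal sketch (not part of the thesis) of how an automated agent might fill out a web search form and extract records from the result page. The URL, the form field name and the CSS selectors are hypothetical placeholders; a real deep web interface would additionally require the label-extraction and result-page analysis described above.

import requests
from bs4 import BeautifulSoup

# Hypothetical search interface of a web database (deep web resource).
FORM_URL = "https://example.org/catalog/search"

def query_web_database(term):
    # Submit the query the way a user would via the search form.
    # The field name "q" and the selectors below are assumptions for illustration.
    response = requests.get(FORM_URL, params={"q": term}, timeout=10)
    response.raise_for_status()

    # The result page is a dynamic page embedding records from the database;
    # extract the structured data from it.
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for row in soup.select("div.result"):
        title = row.select_one("h3")
        snippet = row.select_one("p.summary")
        results.append({
            "title": title.get_text(strip=True) if title else None,
            "snippet": snippet.get_text(strip=True) if snippet else None,
        })
    return results

if __name__ == "__main__":
    for record in query_web_database("search frictions"):
        print(record)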

Relevance:

20.00%

Publisher:

Abstract:

This paper describes Question Waves, an algorithm that can be applied to social search protocols such as Asknext or Sixearch. In this model, queries are propagated through the social network, with faster propagation through more trusted acquaintances. Question Waves uses local information to make decisions and obtain an answer ranking. With Question Waves, the answers that arrive first are the most likely to be relevant, and we computed the correlation of answer relevance with the order of arrival to demonstrate this result. We obtained correlations equivalent to those of heuristics that use global knowledge, such as profile similarity among users or the expertise value of an agent. Because Question Waves is compatible with the social search protocol Asknext, it is possible to stop a search when enough relevant answers have been found; additionally, stopping the search early introduces only a minimal risk of not obtaining the best possible answer. Furthermore, Question Waves does not require a re-ranking algorithm because the results arrive sorted.
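
The core idea, namely that a query travels faster over more trusted links so that relevant answers tend to arrive earlier, can be pictured with a small simulation. This is an illustrative sketch rather than the authors' implementation: the network, the trust values, the per-agent relevance scores and the delay rule (delay inversely proportional to trust) are all assumptions made for the example.

import heapq

# Hypothetical social network: node -> list of (neighbour, trust in (0, 1]).
NETWORK = {
    "asker": [("alice", 0.9), ("bob", 0.4)],
    "alice": [("carol", 0.8)],
    "bob":   [("dave", 0.3)],
    "carol": [],
    "dave":  [],
}

# Hypothetical relevance of each agent's answer to the query.
ANSWER_RELEVANCE = {"alice": 0.95, "bob": 0.5, "carol": 0.8, "dave": 0.2}

def question_wave(source):
    """Propagate a query; edges with higher trust transmit it faster."""
    arrival = {source: 0.0}
    heap = [(0.0, source)]
    answers = []  # (arrival_time, agent, relevance), collected in arrival order
    while heap:
        time, node = heapq.heappop(heap)
        if time > arrival.get(node, float("inf")):
            continue  # stale queue entry
        if node != source:
            answers.append((time, node, ANSWER_RELEVANCE[node]))
        for neighbour, trust in NETWORK[node]:
            delay = 1.0 / trust  # more trusted -> faster propagation
            t = time + delay
            if t < arrival.get(neighbour, float("inf")):
                arrival[neighbour] = t
                heapq.heappush(heap, (t, neighbour))
    return answers

if __name__ == "__main__":
    # In this toy network the earliest answers are also the most relevant ones.
    for time, agent, relevance in question_wave("asker"):
        print(f"t={time:.2f}  {agent}  relevance={relevance}")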

Relevance:

20.00%

Publisher:

Abstract:

Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises rest on the supposition that educators and learners will discover existing resources, improve them, and share the results, creating a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.
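
As one concrete example of the kind of data exchange that can make a resource discoverable, structured metadata can be published alongside an OER so that indexers do not have to guess at its properties. The record below is purely illustrative (it loosely follows schema.org/LRMI-style learning-resource metadata) and is not taken from the prototype described above.

import json

# Illustrative learning-resource metadata record; all values are hypothetical.
oer_record = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Introduction to Search Engines",
    "description": "An openly licensed module on how web search works.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "learningResourceType": "lecture notes",
    "educationalLevel": "undergraduate",
    "inLanguage": "en",
}

# Serialized as JSON-LD, the record can be embedded in the resource page
# or exchanged between repositories and indexers.
print(json.dumps(oer_record, indent=2))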

Relevance:

20.00%

Publisher:

Abstract:

The vast majority of users do not look at results beyond the second page offered by a search engine, so if a site fails to appear among the top 20 results (i.e., on the first two pages), it can be said that the page does not have good SEO and is therefore not visible to users. The overall objective of this project is to conduct a study to discover the factors that do (or do not) determine the positioning of websites in a search engine.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study is to investigate consumer search behavior in high-involvement purchases. The results provide a descriptive analysis of the information search phase, which is part of the decision-making process. The study focuses on the customer's choice of information sources, the motivation behind it, and the different factors that influence search behavior. Particular attention is paid to purchase categorization and to the differences in information search between products and services. A qualitative research method was chosen for this study, with data gathered through ten theme interviews in which each participant described his or her own search behavior for a product and for a service. The results indicate that consumer search behavior varies according to purchase categorization and to demographic, individual, and situational factors. Moreover, these factors influence the purpose and position of the information search phase in the five-step decision-making model.

Relevance:

20.00%

Publisher:

Abstract:

The present text proposes a discussion of the concept of true friendship. The argument is grounded mostly in Aristotle's Nicomachean Ethics, Owen Flanagan's ethics as human ecology, and contemporary authors' work on the Greek philosopher's concept of friendship. Given that human beings flourish through (1) exercising their capacities, (2) being moral, and (3) having true friendships, the difficulty of establishing the level of trust that true friendship requires makes the search for such friendships itself morally valid.

Relevance:

20.00%

Publisher:

Abstract:

The article appears in "Post-Log", a subsection of the Daily Sun's editorial section.

Relevance:

20.00%

Publisher:

Abstract:

Search engine optimization and marketing is a set of processes widely used on websites to improve search engine rankings, generate quality web traffic, and increase ROI. Content is the most important part of any website, and CMS-based web development has become essential for most organizations and online businesses developing their online systems and websites. Every online business using a CMS wants to attract users (customers) in order to generate profit and ROI. This thesis comprises a brief study of existing SEO methods, tools, and techniques and of how they can be implemented to optimize a content-based website. As its result, the study provides recommendations on how to use SEO methods, tools, and techniques to optimize CMS-based websites for the major search engines. It compares the SEO features of popular CMS systems such as Drupal, WordPress, and Joomla and discusses how SEO implementation can be improved on these systems. Knowledge of how search engines index pages and how they work is essential for a successful SEO campaign. This work is intended as a complete guideline for web developers and SEO experts who want to optimize a CMS-based website for all major search engines.
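
As an illustration of the kind of on-page check that SEO tools run against a CMS-generated page, the sketch below fetches a page and reports its title tag, meta description and heading count. It is not taken from the thesis; the URL and the rule-of-thumb length limits mentioned in the comments are assumptions.

import requests
from bs4 import BeautifulSoup

def audit_page(url):
    """Report a few common on-page SEO signals for a single page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "") if meta else ""
    h1_count = len(soup.find_all("h1"))

    print(f"Title ({len(title)} chars): {title!r}")        # often kept under ~60 chars
    print(f"Meta description: {len(description)} chars")    # often kept under ~160 chars
    print(f"Number of <h1> headings: {h1_count}")            # usually exactly one

if __name__ == "__main__":
    audit_page("https://example.com/")  # hypothetical page to audit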

Relevance:

20.00%

Publisher:

Abstract:

This study is dedicated to search engine marketing (SEM). It aims to develop a business model for SEM firms and to provide explicit research on the trustworthy practices of virtual marketing companies. Optimization is a general term covering a variety of techniques and methods for promoting web pages. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines technical and economic findings in one place in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data on search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design; a fifth respondent, a representative of a search engine portal, provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases the demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by utilizing web page optimization techniques, keyword consultancy, or content optimization to increase a web site's visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms are small, knowledge-intensive businesses. On the basis of the data analysis, a business model was constructed. The SEM model consists of generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model cover the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure, and financial segments. Approaches are also provided for evaluating a company's differentiation and competitive advantages. It is assumed that search marketers should make further attempts to differentiate their own business from the large number of companies providing similar services. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.

Relevance:

20.00%

Publisher:

Abstract:

The primary objective of this thesis is to assess how backlink portfolio structure and off-site Search Engine Optimisation (SEO) elements influence the ranking of UK-based online nursery shops. The growth of internet use has demanded significant effort from companies to optimize and increase their online presence in order to cope with increasing online competition. Search Engine Optimisation is the e-commerce practice that was developed to help increase the visibility of company websites. The SEO process involves on-site elements (i.e., changing parameters of the company's website such as keywords, title tags, and meta descriptions) and off-site elements (link building and social media marketing activity). Link building is based on several steps of marketing planning, including keyword research and competitor analysis. The underlying goal of keyword research is to understand the targeted market by identifying the relevant keyword queries used by the targeted customer group. In the analysis, three types of keywords (geographic, field-related, and related to the company's strategy) and seven sources of keywords were identified and used as the basis of the analysis. Following the determination of the most popular keywords, allinanchor and allintitle searches were conducted, and the first ten results of each search were collected to identify the companies with the most significant web presence among the nursery shops. Finally, link profiling was performed, the essential goal being to understand to what extent other companies' link structures differ from the base company's backlinks. A significant difference was found that distinguished the top three companies ranking in the allinanchor and allintitle searches. The top three companies, "Mothercare", "Mamas and Papas" and "Kiddicare", maintained significantly better metrics for domain and page authority on their main landing pages, for the average number of outbound links in the link portfolio, and for the number of backlinks. These companies also ranked among the highest in page authority distribution and in followed external linking.
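
The link-profiling step can be pictured with a small sketch. The backlink records and metric names below are hypothetical and are not the thesis data; the point is only to show the kind of aggregation (backlink count, referring domains, mean page authority, mean outbound links per linking page) discussed above.

from statistics import mean

# Hypothetical backlink records for one landing page:
# (linking_domain, page_authority_of_linking_page, outbound_links_on_linking_page)
backlinks = [
    ("blog.example.co.uk", 34, 18),
    ("forum.example.org", 21, 45),
    ("news.example.com", 52, 12),
]

def profile(links):
    """Summarize a backlink portfolio with a few simple metrics."""
    return {
        "backlink_count": len(links),
        "referring_domains": len({domain for domain, _, _ in links}),
        "mean_page_authority": mean(pa for _, pa, _ in links),
        "mean_outbound_links": mean(out for _, _, out in links),
    }

if __name__ == "__main__":
    for metric, value in profile(backlinks).items():
        print(f"{metric}: {value}")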

Relevance:

20.00%

Publisher:

Abstract:

This dissertation explores the use of internal and external sources of knowledge in modern innovation processes. It builds on a framework that combines theories such as the behavioural theory of the firm, the evolutionary theory of economic change, and modern approaches to strategic management. It follows the recent increase in innovation research that focuses on firm-level examination of innovative activities instead of traditional industry-level determinants. The innovation process is seen as a problem- and slack-driven search process that can take several directions in terms of organizational boundaries in the pursuit of new knowledge and other resources. It thus draws on recent models of technological change, according to which firms should nowadays build their innovative activities on both internal and external sources of innovation rather than relying solely on internal resources. Four research questions are addressed, all of which are investigated empirically using a rich dataset on Finnish innovators collected by Statistics Finland. Firstly, the study examines how the nature of problems shapes the direction of the search for new knowledge. In general it demonstrates that the nature of the problem does affect the direction of the search, although under resource constraints firms tend to use external rather than internal sources of knowledge; at the same time, firms that are financially constrained seem to search both internally and externally. Secondly, the dissertation investigates the relationships between different kinds of internal and external knowledge sources in an attempt to find out where firms should direct their search in order to exploit the potential of a distributed innovation process; the concept of complementarities is applied in this context. The third research question concerns how the use of external knowledge sources (openness to external knowledge) influences the financial performance of firms. Given the many advantages of openness presented in the current literature, the focus is on how openness shapes profitability. The results reveal a curvilinear, inverted U-shaped relationship between profitability and openness, the implication being that it pays to be open up to a certain point, but that being too open to external sources may be detrimental to financial performance. Finally, the dissertation addresses some challenges in CIS-based innovation research that have received relatively little attention in prior studies. The general aim is to underline the fact that a comprehensive understanding of the complex process of technological change requires the constant development of methodological approaches (in terms of data and measures, for example). All the empirical analyses included in the dissertation are based on the Finnish CIS (Finnish Innovation Survey 1998-2000).
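
Such an inverted U-shape is typically tested with a quadratic specification. The following formulation is given only for illustration and is not necessarily the exact model estimated in the dissertation:

\pi_i = \beta_0 + \beta_1 O_i + \beta_2 O_i^2 + \gamma' X_i + \varepsilon_i, \qquad \beta_1 > 0,\ \beta_2 < 0,

where \pi_i is the profitability of firm i, O_i its openness (for example, the breadth of external knowledge sources used), and X_i a vector of firm-level controls. Profitability then peaks at the turning point O^* = -\beta_1 / (2\beta_2), beyond which additional openness is associated with lower financial performance.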

Relevance:

20.00%

Publisher:

Abstract:

Bovine respiratory syncytial virus (BRSV) has only sporadically been identified as a causative agent of respiratory disease in Brazil. This contrasts with frequent reports of clinical and histopathological findings suggestive of BRSV-associated disease. In order to examine a possible involvement of BRSV in cases of calf pneumonia, a retrospective search for BRSV antigens was performed in histological specimens submitted to veterinary diagnostic services in the states of Rio Grande do Sul and Minas Gerais. Ten of the 41 cases examined (24.4%) were positive for BRSV antigens by immunohistochemistry (IPX). Eight of these cases (19.5%) were also positive by indirect immunofluorescence (IFA), and 31 cases (75.6%) were negative in both assays. In the lungs, BRSV antigens were predominantly observed in epithelial cells of the bronchioles and were less frequently found in the alveoli. In one case, antigens were detected only in the epithelium of the alveolar septa. The presence of antigen-positive cells was largely restricted to the epithelial cells of these airways. In two cases, positive staining was also observed in cells and cellular debris in the exudate within the pulmonary airways. The clinical cases positive for BRSV antigens were observed mainly in young animals (2 to 12 months old) from dairy herds. The main microscopic changes included bronchointerstitial pneumonia, characterized by thickening of the alveolar septa adjacent to airways by mononuclear cell infiltrates, and the presence of alveolar syncytial giant cells. In summary, the results demonstrate the suitability of the immunodetection of viral antigens in routinely fixed tissue specimens as a diagnostic tool for BRSV infection. Moreover, the findings provide further evidence of the importance of BRSV as a respiratory pathogen of young cattle in southeastern and southern Brazil.