18 results for Web, Search Engine, Overlap


Relevance: 100.00%

Abstract:

Search engine optimization and marketing is a set of processes widely used on websites to improve search engine rankings, which generate quality web traffic and increase ROI. Content is the most important part of any website. CMS-based web development has become essential for most organizations and online businesses building their online systems and websites, and every online business using a CMS wants to reach users (customers) to make a profit and ROI. This thesis comprises a brief study of existing SEO methods, tools and techniques and how they can be implemented to optimize a content-based website. As its result, the study provides recommendations on how to use SEO methods, tools and techniques to optimize CMS-based websites for the major search engines. The study compares the SEO features of popular CMS platforms such as Drupal, WordPress and Joomla, and how the implementation of SEO can be improved on these systems. Knowledge of search engine indexing and of how search engines work is essential for a successful SEO campaign. This work is a complete guideline for web developers and SEO experts who want to optimize a CMS-based website for all major search engines.
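As an illustration of the kind of on-page check such a guideline covers, the sketch below fetches a page and reads its title and meta description; the URL and the length thresholds are illustrative assumptions, not taken from the thesis.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class SEOAudit(HTMLParser):
    """Collects the <title> text and the meta description of a page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = urlopen("https://example.com/").read().decode("utf-8", "replace")
audit = SEOAudit()
audit.feed(html)

# Common on-page rules of thumb: title <= ~60 chars, description <= ~160 chars.
title = audit.title.strip()
print("title:", title, "-- length OK:", len(title) <= 60)
print("description:", audit.meta_description,
      "-- length OK:", len(audit.meta_description) <= 160)
```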

Relevance: 100.00%

Abstract:

This study is dedicated to search engine marketing (SEM). It aims to develop a business model for SEM firms and to provide explicit research on the trustworthy practices of virtual marketing companies. Optimization is a general term covering a variety of techniques and methods for promoting web pages. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines in one place both technical and economic findings in order to highlight the technological and business attributes incorporated in SEM activities. Empirical data on search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design; a fifth respondent, a representative of a search engine portal, additionally provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases the demand for promotional programs. SEM is the largest part of online marketing, and it is a prerogative of search engine portals. However, skilled users, or marketers, are able to implement long-term marketing programs by utilizing web page optimization techniques, keyword consultancy or content optimization to increase a web site's visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms are small knowledge-intensive businesses. On the basis of the data analysis, a business model was constructed. The SEM model includes generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model comprise the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure and financial segments. Approaches to a company's differentiation and to the evaluation of competitive advantages were also provided. It is suggested that search marketers should make further efforts to differentiate their business from the large number of similar service-providing companies. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.

Relevance: 100.00%

Abstract:

The primary objective of this thesis is to assess how the backlink portfolio structure and off-site Search Engine Optimisation (SEO) elements influence the ranking of UK-based online nursery shops. The growth of internet use has demanded significant effort from companies to optimize and increase their online presence in order to cope with increasing online competition. Search Engine Optimisation is the e-commerce practice that developed to help increase the visibility of company websites. The SEO process involves on-site elements (i.e., changing the parameters of the company's website, such as keywords, title tags and meta descriptions) and off-site elements (link building and social media marketing activity). Link building is based on several steps of marketing planning, including keyword research and competitor analysis. The underlying goal of keyword research is to understand the targeted market by identifying the relevant keyword queries used by the targeted customer group. In the analysis, three types of keywords (geographic, field-related and related to the company's strategy) and seven sources of keywords were identified and used as the basis of the analysis. Following the determination of the most popular keywords, allinanchor and allintitle searches were conducted and the first ten results of each search were collected to identify the companies with the most significant web presence among the nursery shops. Finally, link profiling was performed, with the essential goal of understanding to what extent the other companies' link structures differ from the base company's backlinks. A significant difference was found that distinguished the top three companies in the allinanchor and allintitle rankings. The top three companies, "Mothercare", "Mamas and Papas" and "Kiddicare", maintained significantly better metrics in domain and page authority on the main landing pages, in the average number of outbound links in the link portfolio, and in the number of backlinks. These companies also ranked among the highest in page authority distribution and in followed external links.
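A minimal sketch of the kind of backlink-profile metrics such an analysis rests on, using toy records; real data would come from a backlink index, and the record format shown here is an assumption, not the thesis's dataset.

```python
from collections import Counter
from urllib.parse import urlsplit

# Toy backlink records: (linking page URL, anchor text). Illustrative only.
backlinks = [
    ("https://blog.example.org/prams-review", "best prams"),
    ("https://forum.example.net/t/123", "Mothercare"),
    ("https://blog.example.org/car-seats", "car seats uk"),
]

# Two basic link-portfolio metrics: referring domains and anchor-text spread.
referring_domains = Counter(urlsplit(url).hostname for url, _ in backlinks)
anchor_texts = Counter(anchor.lower() for _, anchor in backlinks)

print("total backlinks:", len(backlinks))
print("referring domains:", dict(referring_domains))
print("most common anchors:", anchor_texts.most_common(3))
```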

Relevance: 100.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Abstract:

The value of online business has grown to over one trillion USD. This thesis is about search engine optimization, whose focus is to increase search engine rankings. Search engine optimization is an important branch of online marketing because the first page of search engine results generates the majority of search traffic. Current articles about search engine optimization and Google indicate that the proper use of quality content has the potential to improve search engine rankings. However, the existing search engine optimization literature does not address content at a sufficient level. To narrow that gap, a content-centered method for search engine optimization is constructed, and the role of content in search engine optimization is studied. The content-centered method consists of three search engine optimization tactics: 1) content, 2) keywords, and 3) links. Two propositions were used for testing these tactics in a real business environment, and the results suggest that the content-centered method improves search engine rankings. Search engine optimization is constantly changing because Google adjusts its search algorithm regularly. Still, some long-term trends can be recognized. Google has said that content will grow in importance as a ranking factor in the future. The content-centered method takes advantage of this trend in search engine optimization so as to remain relevant for years to come.
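As a rough illustration of the keyword tactic, the sketch below computes a simple keyword-density figure for page text; the formula and the sample text are illustrative assumptions, not the thesis's method or data.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Share of the words in `text` taken up by occurrences of `keyword`."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    joined = " ".join(words)
    hits = len(re.findall(r"\b" + re.escape(keyword.lower()) + r"\b", joined))
    return hits * len(keyword.split()) / len(words) if words else 0.0

page_text = ("Search engine optimization improves rankings. "
             "Quality content drives search traffic.")
print(f"{keyword_density(page_text, 'search'):.1%}")  # 20.0% on this toy text
```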

Relevance: 100.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. Such assumptions do not hold, however, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user has to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
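A minimal sketch of automated form querying in the spirit of the above, not the thesis's I-Crawler or its query language: it scans a page for the first GET search form and submits a query term through its first text field. The URL and the single-field handling are illustrative assumptions.

```python
from html.parser import HTMLParser
from urllib.parse import urlencode, urljoin
from urllib.request import urlopen

class FormFinder(HTMLParser):
    """Records the action of the first GET <form> and its text-input names."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = []
        self.in_form = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if (tag == "form" and self.action is None
                and attrs.get("method", "get").lower() == "get"):
            self.action = attrs.get("action", "")
            self.in_form = True
        elif tag == "input" and self.in_form and attrs.get("type", "text") == "text":
            if attrs.get("name"):
                self.fields.append(attrs["name"])

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

base = "https://example.com/search.html"  # hypothetical search interface
finder = FormFinder()
finder.feed(urlopen(base).read().decode("utf-8", "replace"))

if finder.action is not None and finder.fields:
    # Fill the first text field with a query term and issue a GET request.
    query = urlencode({finder.fields[0]: "deep web"})
    result_page = urlopen(urljoin(base, finder.action) + "?" + query).read()
    print(len(result_page), "bytes of result page to extract data from")
```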

Relevance: 100.00%

Abstract:

More and more people now search for information about products and services on the internet. Correspondingly, almost every company uses its website as a marketing channel. Yet when it comes to basic marketing questions, such as reaching the target segment or the return on a campaign, often neither the marketing department nor the IT department can provide an answer where the website is concerned. Search engine optimization is one form of search engine marketing that can be used to improve the reachability of websites. Verifying progress requires metrics, and for websites these can be provided by visitor tracking (web analytics) software. This thesis discusses search engine optimization and its potential to improve the visibility of sites in internet search engines. Search engine optimization means adapting the technical implementation of a site to be search engine friendly and editing its content so that the site ranks at the top of the search results for the desired keywords. To measure success, the thesis examines the possibilities and implementation of visitor tracking. The goal of the work was to bring Primesoft Oy sufficient know-how about search engine optimization, to implement a search engine optimization service, and to adapt the company's software to support search engine optimization. The goals were largely achieved, and the study of search engine optimization opened a gateway to the whole world of internet marketing. The service was tested on Primesoft's own website, and the results proved quite encouraging. In the future, search engine optimization can be offered to customers as a service.

Relevance: 100.00%

Abstract:

Yandex is the dominant search engine in Russia, followed by the world leader Google. This study focuses on the performance differences between the two in search advertising in the context of tourism, by running two identical campaigns and measuring the KPIs, such as CPA (cost-per-action), of both campaigns. Search engine advertising is a new and fast-changing form of advertising, which should be studied frequently in order to keep up with the changes. The research was conducted as an experimental study in cooperation with a Finnish tourism company, and the data were gathered from the clickstream rather than from questionnaires, which is the method recommended by the literature. The results of the study suggest that Yandex.Direct performed better in the selected niche and that planning campaigns individually for Yandex.Direct and Google AdWords is an important part of optimizing search advertising in Russia.
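For illustration, a minimal sketch of this kind of KPI comparison; the figures below are invented, not the study's data.

```python
# CPA = total cost / number of conversions ("actions");
# CPC = total cost / clicks. All numbers are made up for illustration.
campaigns = {
    "Yandex.Direct":  {"cost": 500.0, "clicks": 1200, "conversions": 48},
    "Google AdWords": {"cost": 500.0, "clicks": 900,  "conversions": 31},
}

for name, c in campaigns.items():
    cpc = c["cost"] / c["clicks"]
    cpa = c["cost"] / c["conversions"]
    conv_rate = c["conversions"] / c["clicks"]
    print(f"{name}: CPC {cpc:.2f}, CPA {cpa:.2f}, conversion rate {conv_rate:.1%}")
```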

Relevance: 100.00%

Abstract:

This master's thesis examines how analysing the behaviour of an online store's visitor flow can support well-founded decisions about the appropriate items and their parameters in a situation where more extensive historical data on realized sales is missing. Based on a review of the literature, a solution model was constructed that rests on forming and testing potential demand drivers. The driver selected on the basis of a series of tests is used to estimate item demand, so that it can stand in for realized sales, for example in a Pareto analysis. In this way attention can be focused on a limited number of high-importance items and on the detailed parameters that matter in customers' purchase decisions. In addition, it is possible to identify items whose problem is either poor online visibility or incompatibility with customer needs. The drivers are tested by examining the agreement of cumulative distribution functions, which proceeds in three consecutive stages: visual inspection, a two-sample two-sided Kolmogorov-Smirnov goodness-of-fit test, and Pearson's correlation test. The model and the demand driver produced with it were tested in an online store aimed at boating consumers, where it identified, at the head of the Pareto distribution, a large number of items whose parameters contained factors unfavourable to sales. At the other end of the distribution, hundreds of items were identified whose problem is apparently either poor online visibility or incompatibility with customer needs.
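A minimal sketch of the two statistical stages on synthetic data (the visual CDF inspection is omitted); the driver, item counts and distributions are assumptions for illustration only, not the thesis's data.

```python
import numpy as np
from scipy.stats import ks_2samp, pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-ins: realized sales per item (where known) and a candidate
# demand driver, e.g. product-page visits per item from the clickstream.
sales = rng.pareto(2.0, size=200) * 10
visits = sales * rng.normal(5.0, 1.0, size=200).clip(min=0.1)

# Stage 2: two-sample Kolmogorov-Smirnov test (two-sided by default),
# comparing the distributions after normalizing each series to shares.
ks_stat, ks_p = ks_2samp(sales / sales.sum(), visits / visits.sum())

# Stage 3: Pearson correlation between the driver and realized sales.
r, r_p = pearsonr(visits, sales)

print(f"KS statistic {ks_stat:.3f} (p={ks_p:.3f}), Pearson r {r:.3f} (p={r_p:.3g})")
```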

Relevance: 100.00%

Abstract:

The number of digital images has been increasing exponentially in the last few years. People have problems managing their image collections and finding a specific image. An automatic image categorization system could help them manage images and find specific ones. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. Because the system is unsupervised, it does not need labelled training images, which would have to be obtained manually; therefore, the number of possible categories and images can be huge. The implemented system extracts local features from the images. These local features are used to build a codebook, and the local features and the codebook are then used to generate a feature vector for each image. Images are categorized based on the feature vectors. The system is able to categorize any given set of images based on their visual appearance: images that have similar image regions are grouped together in the same category, so that, for example, images which contain cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine: the system can categorize images for a user, and the user can then easily find a specific type of image.
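A minimal bag-of-visual-words sketch of the pipeline described above, assuming the local descriptors are already extracted (random stand-ins below); the codebook size and cluster counts are illustrative, not the thesis's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assume each image already has local feature descriptors (e.g. 128-D);
# here they are random stand-ins, one array of descriptors per image.
descriptors_per_image = [rng.normal(size=(rng.integers(50, 200), 128))
                         for _ in range(30)]

# 1. Build a codebook by clustering all local descriptors into visual words.
codebook = KMeans(n_clusters=64, n_init=4, random_state=0)
codebook.fit(np.vstack(descriptors_per_image))

# 2. Represent each image as a normalized histogram of visual-word counts.
def bow_vector(desc):
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / hist.sum()

features = np.array([bow_vector(d) for d in descriptors_per_image])

# 3. Categorize images by clustering their histograms (fully unsupervised).
categories = KMeans(n_clusters=5, n_init=4, random_state=0).fit_predict(features)
print(categories)
```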

Relevance: 100.00%

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study, and demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
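A minimal linear sketch of the pairwise least-squares idea behind RankRLS, not the thesis's kernel implementation or its matrix-algebra shortcuts; the data and the regularization value are synthetic assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy data: objects with feature vectors and real-valued relevance scores.
X = rng.normal(size=(40, 6))
w_true = rng.normal(size=6)
y = X @ w_true + 0.1 * rng.normal(size=40)

# Pairwise least-squares ranking: penalize the squared difference between
# predicted and observed score differences over all object pairs,
#   min_w  sum_{i<j} (w.(x_i - x_j) - (y_i - y_j))^2 + lam * ||w||^2,
# which has the ridge-regression closed form on pairwise differences.
pairs = list(combinations(range(len(X)), 2))
D = np.array([X[i] - X[j] for i, j in pairs])
d = np.array([y[i] - y[j] for i, j in pairs])

lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(X.shape[1]), D.T @ d)

# Fraction of pairs ranked in the correct order by the learned scorer
# (for binary labels this quantity is exactly the AUC).
scores = X @ w
usable = [(i, j) for i, j in pairs if y[i] != y[j]]
correct = sum((scores[i] - scores[j]) * (y[i] - y[j]) > 0 for i, j in usable)
print("pairwise accuracy:", correct / len(usable))
```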

Relevance: 100.00%

Abstract:

The general purpose of this thesis was to describe and explain the particularities of inbound marketing methods and their key advantages. Inbound marketing can be narrowed down to a set of marketing strategies and techniques focused on pulling prospects towards a business and its products on the Internet by producing content that is useful and relevant to them. The main inbound marketing methods and channels were identified as blogging, content publishing, search engine optimization and social media. The best way to utilise these methods is to produce great content on subjects that interest the target group, which is usually a mix of buyers, existing customers and influencers, such as analysts and the media. The study revealed an increase in Lainaaja.fi traffic and referral traffic sources that was firmly confirmed as statistically significant, while the number of backlinks and the SERP placement were clearly positively correlated with the campaign, but not statistically significantly so. The number of new registered users, new loan applicants and deposits did not correlate with the increased content production. The study concludes that the inbound marketing campaign clearly increased website traffic and plausibly helped achieve better search engine results compared to the control period. The implications are clear: inbound marketing is an activity that every business should consider implementing. But just producing content online is not enough; an equal amount of work should be put into turning visitors into customers. Further studies are recommended on combining inbound marketing with the monitoring of landing pages and conversion optimization for incoming visitors.

Relevance: 100.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Abstract:

The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements reliable haptic feedback in the interaction between the human and the manipulator, and ideal position control in the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique, independent of the actual control algorithm, and can be combined with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs, with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method with a real-time simulation to develop an intelligent controller, in which each generation of parameters is tested online by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A particle swarm optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system provides a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force.

In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration; a major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially a harsh one such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents an experimental validation of the calibration method with one of the force sensors to which it has been applied.
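A minimal sketch of the directed-random-search idea, not the thesis's PSO/E. coli hybrid or its MCMC-based simulator: a plain particle swarm in which every candidate parameter vector is evaluated on a stand-in simulator before the best one would be applied to the real process. The cost function and parameter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cost(gains):
    """Stand-in for the real-time simulator: returns a tracking-error cost
    for a candidate controller parameter vector (here, a toy quadratic)."""
    target = np.array([2.0, 0.5, 0.1])  # hypothetical "good" gains
    return float(np.sum((gains - target) ** 2))

# Minimal particle swarm over controller gains (e.g. PID-like parameters).
n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
pos = rng.uniform(0, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_cost = pos.copy(), np.array([simulate_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # Every candidate generation is tested on the simulator, never on the
    # real plant; only the final best vector would be applied to the process.
    cost = np.array([simulate_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("gains to apply to the real process:", np.round(gbest, 3))
```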