17 results for SEO (Search Engine Optimization)

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Finding relevant results for a query submitted to multiple search engines is an important task. This paper formulates the problem of aggregating and ranking the results of multiple search engines as a minimax linear programming model. Besides the novel application, this study identifies the most relevant information among the returned set of ranked lists of documents retrieved by distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
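A minimal sketch of how such a minimax aggregation can be posed as a linear programme is shown below; it illustrates the general idea rather than the exact model from the paper, and the document names, rank values, and use of SciPy are assumptions.

```python
# Minimax aggregation of ranked lists from several search engines (illustrative sketch).
# Each engine assigns a rank position to each document; we seek aggregate scores x_d
# that minimise the worst-case disagreement z with any engine's rank, i.e.
#   minimise z  subject to  |x_d - rank_e(d)| <= z  for every engine e and document d.
import numpy as np
from scipy.optimize import linprog

# Hypothetical rank positions (1 = best) reported by three engines for four documents.
ranks = {
    "doc_a": [1, 2, 1],
    "doc_b": [2, 1, 3],
    "doc_c": [3, 4, 2],
    "doc_d": [4, 3, 4],
}
docs = list(ranks)
n, m = len(docs), len(next(iter(ranks.values())))

# Variables: [x_1 .. x_n, z]; the objective minimises z only.
c = np.zeros(n + 1)
c[-1] = 1.0

A_ub, b_ub = [], []
for i, d in enumerate(docs):
    for e in range(m):
        row = np.zeros(n + 1)
        row[i], row[-1] = 1.0, -1.0          #  x_d - z <= rank_e(d)
        A_ub.append(row); b_ub.append(ranks[d][e])
        row = np.zeros(n + 1)
        row[i], row[-1] = -1.0, -1.0         # -x_d - z <= -rank_e(d)
        A_ub.append(row); b_ub.append(-ranks[d][e])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (n + 1))
aggregate = sorted(zip(docs, res.x[:n]), key=lambda t: t[1])
print("aggregated order:", [d for d, _ in aggregate])
```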

Relevance:

100.00%

Publisher:

Abstract:

Existing semantic search tools have been designed primarily to enhance the performance of traditional search technologies, but with little support for ordinary end users who are not necessarily familiar with domain-specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine that pays special attention to this issue by providing several means to hide the complexity of semantic search from end users, thus making it easy to use and effective.

Relevance:

100.00%

Publisher:

Abstract:

This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining similar search performance.
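The incremental idea can be conveyed with a small sketch that rebalances index partitions when the number of machines changes, moving as few partitions as possible instead of rebuilding the assignment from scratch. This is a generic rebalancing sketch under assumed data structures, not the paper's metric-space algorithm.

```python
# Incrementally rebalance index partitions across a changed number of machines,
# moving only as many partitions as needed (illustrative, not the paper's algorithm).
from collections import defaultdict

def rebalance(assignment: dict[int, int], new_machines: int) -> dict[int, int]:
    """assignment maps partition id -> machine id; returns an updated mapping
    for machines 0..new_machines-1 that keeps existing placements where possible."""
    parts = list(assignment)
    target = len(parts) // new_machines      # minimum partitions per machine
    extra = len(parts) % new_machines        # the first `extra` machines take one more

    capacity = {m: target + (1 if m < extra else 0) for m in range(new_machines)}
    load = defaultdict(int)
    result, overflow = {}, []

    # Keep a partition on its current machine if that machine still exists and has room.
    for p, m in assignment.items():
        if m < new_machines and load[m] < capacity[m]:
            result[p] = m
            load[m] += 1
        else:
            overflow.append(p)

    # Place the remaining partitions on machines that still have spare capacity.
    for p in overflow:
        m = next(m for m in range(new_machines) if load[m] < capacity[m])
        result[p] = m
        load[m] += 1
    return result

# Example: 8 partitions spread over 4 machines, then shrink to 3 machines.
old = {p: p % 4 for p in range(8)}
print(rebalance(old, 3))
```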

Relevance:

100.00%

Publisher:

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p that should be active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, our NGP solution speeds up the index computation 2–10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. Combined, these algorithms yield an adaptive algorithm that is able to adjust the search engine's size under a variable workload.
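As an illustration of the CNP idea, the sketch below shows a simple load-based controller that picks a processor count from a measured query rate, with hysteresis so that small fluctuations do not trigger resizing. The capacity figure, thresholds and hysteresis margin are assumptions, not the thesis's actual algorithm.

```python
# Toy CNP-style controller: choose the number of active processors from the observed
# query rate, with a hysteresis band so minor fluctuations do not trigger resizing.
# (Illustrative sketch; the capacity and margin values are assumptions.)
import math

QUERIES_PER_PROC = 200.0   # assumed sustainable queries/sec per processor
HYSTERESIS = 0.15          # only resize if the load leaves a +/-15% band

def next_size(current: int, observed_qps: float,
              min_procs: int = 1, max_procs: int = 64) -> int:
    ideal = max(min_procs, min(max_procs, math.ceil(observed_qps / QUERIES_PER_PROC)))
    # Stay at the current size while the observed load is within the hysteresis band.
    lower = (current - 1) * QUERIES_PER_PROC * (1.0 - HYSTERESIS)
    upper = current * QUERIES_PER_PROC * (1.0 + HYSTERESIS)
    if lower <= observed_qps <= upper:
        return current
    return ideal

size = 4
for qps in [750, 820, 980, 1300, 600, 150]:
    size = next_size(size, qps)
    print(f"load={qps:>5} qps -> {size} processors")
```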

Relevance:

100.00%

Publisher:

Abstract:

When a query is passed to multiple search engines, each search engine returns a ranked list of documents. Researchers have demonstrated that combining results, in the form of a "metasearch engine", produces a significant improvement in coverage and search effectiveness. This paper proposes a linear programming model for optimizing the combined ranked list produced by a given group of Web search engines for an issued query. An application with a numerical illustration shows the advantages of the proposed method. © 2011 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: A case study is presented concerning a gamified awards system designed to encourage software users to explore a suite of tools and to share their expertise level on profile pages. Majestic is a high-tech business based in the West Midlands (UK) which offers a Link Intelligence database using a Software as a Service (SaaS) business model. Customers leverage the database for tasks including Search Engine Optimisation (SEO) by using a suite of web-based tools. Getting to know all the tools and how they can be deployed to good effect represents a considerable learning challenge, and Majestic were aware of this. Design/methodology/approach: We present the development of Majestic Awards as a case study highlighting the most important design decisions. We then reflect on the development process as an example of innovation adoption, thereby identifying resources and cultural factors which were critical in ensuring the success of the project. Findings: The gamified awards system makes learning the tools an enjoyable, explorative experience. Success factors included identifying a clear business goal, the process/project fit, senior management buy-in, and identifying the knowledge and resources to resolve technical issues. Originality/value: Prior to gamification of the system, only the most expert users regularly utilized all the tools. The user base is now more knowledgeable about the system, and some users choose to use the system to publicize their expertise.

Relevance:

100.00%

Publisher:

Abstract:

Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Due to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on the analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.
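To give a flavour of structure-based ranking, the sketch below scores toy ontologies by how well their class names match the query terms and by how richly the matched classes are connected. The metrics, weights and data layout are simplified assumptions; they are not AKTiveRank's actual measures.

```python
# Toy structure-based ontology ranking: score each ontology by coverage of the query
# terms among its class names and by the connectivity (density) of the matched classes.
# (Simplified illustration; not AKTiveRank's actual metrics.)

# Each ontology is a dict: class name -> set of directly related classes.
ontologies = {
    "onto_a": {"Student": {"Person", "Course"}, "Person": {"Agent"},
               "Course": {"Module"}, "Module": set(), "Agent": set()},
    "onto_b": {"Student": set(), "University": {"Organisation"}, "Organisation": set()},
}

def score(ontology: dict[str, set[str]], query_terms: list[str]) -> float:
    matched = [c for c in ontology if c.lower() in {t.lower() for t in query_terms}]
    if not matched:
        return 0.0
    coverage = len(matched) / len(query_terms)
    # Density: average number of relations attached to the matched classes,
    # normalised by the largest degree found in the ontology.
    max_degree = max(len(neigh) for neigh in ontology.values()) or 1
    density = sum(len(ontology[c]) for c in matched) / (len(matched) * max_degree)
    return 0.6 * coverage + 0.4 * density

query = ["student", "course"]
for name, onto in sorted(ontologies.items(), key=lambda kv: -score(kv[1], query)):
    print(f"{name}: {score(onto, query):.2f}")
```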

Relevance:

100.00%

Publisher:

Abstract:

In view of the need to provide tools to facilitate the re-use of existing knowledge structures such as ontologies, we present in this paper a system, AKTiveRank, for the ranking of ontologies. AKTiveRank uses as input the search terms provided by a knowledge engineer and, using the output of an ontology search engine, ranks the ontologies. We apply a number of metrics in an attempt to investigate their appropriateness for ranking ontologies, and compare the results with a questionnaire-based human study. Our results show that AKTiveRank will have great utility although there is potential for improvement.

Relevance:

100.00%

Publisher:

Abstract:

In practical terms, any result obtained using an ordered weighted averaging (OWA) operator depends heavily upon the method used to determine the weighting vector. Several approaches for obtaining the associated weights have been suggested in the literature, none of which takes into account the preference of alternatives. This paper presents a method for determining the OWA weights when the preferences of the alternatives across all the criteria are considered. An example is given to illustrate this method, and an application to an Internet search engine shows the use of this new OWA operator.
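For readers unfamiliar with OWA operators, the sketch below shows a conventional way of deriving a weighting vector from a regular increasing monotone (RIM) quantifier and then applying the operator; it is a standard baseline, not the preference-aware weight determination proposed in this paper, and the scores are made up.

```python
# OWA aggregation with weights derived from a RIM quantifier Q(r) = r**alpha
# (a common baseline; the paper's preference-aware method is not reproduced here).
def owa_weights(n: int, alpha: float = 2.0) -> list[float]:
    q = lambda r: r ** alpha
    return [q(i / n) - q((i - 1) / n) for i in range(1, n + 1)]

def owa(values: list[float], weights: list[float]) -> float:
    # OWA first reorders the arguments in descending order, then takes the weighted sum.
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.9, 0.4, 0.7, 0.2]          # e.g. relevance scores from four criteria
w = owa_weights(len(scores))
print("weights:", [round(x, 3) for x in w])
print("OWA score:", round(owa(scores, w), 3))
```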

Relevance:

100.00%

Publisher:

Abstract:

Geospatial data have become a crucial input for the scientific community for understanding the environment and developing environmental management policies. The Global Earth Observation System of Systems (GEOSS) Clearinghouse is a catalogue and search engine that provides access to Earth Observation metadata. However, metadata are often not easily understood by users, especially when presented in ISO XML encoding. The data quality information included in the metadata is essential for users to select datasets suitable for them. This work aims to help users understand the quality information held in metadata records and to present the results to geospatial users in an understandable and comparable way. Thus, we have developed an enhanced tool (Rubric-Q) for visually assessing the metadata quality information and quantifying the degree of metadata population. Rubric-Q is an extension of a previous NOAA Rubric tool used as a metadata training and improvement instrument. The paper also presents a thorough assessment of the quality information obtained by applying Rubric-Q to all dataset metadata records available in the GEOSS Clearinghouse. The results reveal that just 8.7% of the datasets have some quality element described in the metadata, 63.4% have some lineage element documented, and merely 1.2% have some usage element described. © 2013 IEEE.
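The kind of check involved can be sketched as a small script that looks for quality, lineage and usage elements in an ISO 19139 metadata record. The element names and the placeholder namespace below are assumptions, and the yes/no scoring is a drastic simplification of what a rubric tool would do.

```python
# Detect the presence of quality, lineage and usage elements in an ISO 19139 metadata
# record (illustrative sketch; a rubric tool would score many more sections).
import xml.etree.ElementTree as ET

# Local element names usually carrying the information of interest (assumed here).
QUALITY_TAGS = {"DQ_DataQuality"}
LINEAGE_TAGS = {"LI_Lineage"}
USAGE_TAGS = {"MD_Usage"}

def local_name(tag: str) -> str:
    """Strip the namespace part of an ElementTree tag like '{ns}LI_Lineage'."""
    return tag.rsplit("}", 1)[-1]

def element_presence(xml_text: str) -> dict[str, bool]:
    root = ET.fromstring(xml_text)
    names = {local_name(el.tag) for el in root.iter()}
    return {
        "quality": bool(names & QUALITY_TAGS),
        "lineage": bool(names & LINEAGE_TAGS),
        "usage": bool(names & USAGE_TAGS),
    }

# Minimal record using a placeholder namespace URI, just for the demonstration.
sample = """<MD_Metadata xmlns:gmd="http://example.org/gmd">
  <gmd:dataQualityInfo><gmd:DQ_DataQuality>
    <gmd:lineage><gmd:LI_Lineage/></gmd:lineage>
  </gmd:DQ_DataQuality></gmd:dataQualityInfo>
</MD_Metadata>"""
print(element_presence(sample))   # {'quality': True, 'lineage': True, 'usage': False}
```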

Relevance:

100.00%

Publisher:

Abstract:

Background: Information seeking is an important coping mechanism for dealing with chronic illness. Despite a growing number of mental health websites, there is little understanding of how patients with bipolar disorder use the Internet to seek information. Methods: A 39-question, paper-based, anonymous survey, translated into 12 languages, was completed by 1222 patients in 17 countries as a convenience sample between March 2014 and January 2016. All patients had a diagnosis of bipolar disorder from a psychiatrist. Data were analyzed using descriptive statistics and generalized estimating equations to account for correlated data. Results: 976 (81% of 1212 valid responses) of the patients used the Internet, and of these 750 (77%) looked for information on bipolar disorder. When looking online for information, 89% used a computer rather than a smartphone, and 79% started with a general search engine. The primary reasons for searching were drug side effects (51%), to learn anonymously (43%), and for help coping (39%). About 1/3 rated their search skills as expert, and 2/3 as basic or intermediate. 59% preferred a website on mental illness and 33% preferred Wikipedia. Only 20% read or participated in online support groups. Most patients (62%) searched a couple of times a year. Online information seeking helped about 2/3 to cope (41% of the entire sample). About 2/3 did not discuss Internet findings with their doctor. Conclusion: Online information seeking helps many patients to cope, although alternative information sources remain important. Most patients do not discuss Internet findings with their doctor, and concern remains about the quality of online information, especially related to prescription drugs. Patients may not rate their search skills accurately and may not understand the limitations of online privacy. More patient education about online information searching is needed, and physicians should recommend a few high-quality websites.

Relevance:

30.00%

Publisher:

Abstract:

When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with regard to investor preferences, but because of computational limitations it has until now been infeasible to use when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems of 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also lends further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
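As a rough illustration of the approach, the sketch below uses SciPy's differential evolution to search for portfolio weights that maximise the average of an investor-specific utility over return scenarios. The utility function, the synthetic return data and the problem size are assumptions; the paper's FSO setup is considerably larger.

```python
# Full-scale-optimisation-style portfolio search with differential evolution
# (toy illustration: 5 assets, synthetic returns, a simple power utility).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.05, size=(250, 5))   # 250 scenarios x 5 assets (synthetic)

def negative_expected_utility(raw_weights: np.ndarray) -> float:
    w = raw_weights / raw_weights.sum()             # normalise to a long-only portfolio
    wealth = 1.0 + returns @ w                      # end-of-period wealth per scenario
    utility = (wealth ** (1 - 3) - 1) / (1 - 3)     # CRRA utility, risk aversion 3
    return -utility.mean()                          # minimise the negative => maximise

bounds = [(0.01, 1.0)] * returns.shape[1]
result = differential_evolution(negative_expected_utility, bounds, seed=1, tol=1e-8)
weights = result.x / result.x.sum()
print("optimal weights:", np.round(weights, 3))
```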

Relevance:

30.00%

Publisher:

Abstract:

The non-linear programming algorithms for the minimum weight design of structural frames are presented in this thesis. The first, which is applied to rigidly jointed and pin jointed plane frames subject to deflexion constraints, consists of a search in a feasible design space. Successive trial designs are developed so that the feasibility and the optimality of the designs are improved simultaneously. It is found that this method is restricted to the design of structures with few unknown variables. The second non-linear programming algorithm is presented in a general form. This consists of two types of search, one improving feasibility and the other optimality. The method speeds up the 'feasible direction' approach by obtaining a constant weight direction vector that is influenced by the dominating constraints. For pin jointed plane and space frames this method is used to obtain a 'minimum weight' design which satisfies restrictions on stresses and deflexions. The matrix force method enables the design requirements to be expressed in a general form, and the design problem is automatically formulated within the computer. Examples are given to explain the method, and the design criteria are extended to include member buckling. Fundamental theorems are proposed and proved to confirm that structures are inter-related. These theorems are applicable to linear elastic structures and facilitate the prediction of the behaviour of one structure from the results of analysing another, more general, or related structure. It becomes possible to evaluate the significance of each member in the behaviour of a structure, and the problem of minimum weight design is extended to include shape. A method is proposed to design structures of optimum shape with stress and deflexion limitations. Finally a detailed investigation is carried out into the design of structures to study the factors that influence their shape.
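Minimum weight design with a deflexion limit can be illustrated on a deliberately tiny problem: sizing the square section of a cantilever so that the tip deflexion stays within an allowable value. The member properties and loads below are assumed figures, and a general-purpose NLP solver stands in for the thesis's search algorithms.

```python
# Minimum weight design of a cantilever with a square cross-section, subject to a
# tip deflexion limit (toy stand-in for the frame design problems in the thesis).
import numpy as np
from scipy.optimize import minimize

L = 2.0            # m, cantilever length          (assumed)
P = 5_000.0        # N, tip load                   (assumed)
E = 210e9          # Pa, Young's modulus for steel (assumed)
RHO = 7850.0       # kg/m^3, steel density         (assumed)
D_MAX = 0.005      # m, allowable tip deflexion    (assumed)

def weight(x: np.ndarray) -> float:
    b = x[0]                       # side length of the square section
    return RHO * b * b * L         # mass in kg

def deflexion_margin(x: np.ndarray) -> float:
    b = x[0]
    I = b ** 4 / 12.0                          # second moment of area
    delta = P * L ** 3 / (3.0 * E * I)         # tip deflexion of a cantilever
    return D_MAX - delta                       # must be >= 0 to be feasible

result = minimize(weight, x0=[0.1], bounds=[(0.01, 0.5)],
                  constraints=[{"type": "ineq", "fun": deflexion_margin}])
b_opt = result.x[0]
print(f"section side: {b_opt * 1000:.1f} mm, mass: {weight(result.x):.1f} kg")
```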

Relevance:

30.00%

Publisher:

Abstract:

This article presents a laser tracker position optimization code based on the tracker uncertainty model developed by the National Physical Laboratory (NPL). The code is able to find the optimal tracker positions for generic measurements involving one tracker or a network of many trackers, and an arbitrary set of targets. The optimization is performed using pattern search or, optionally, a genetic algorithm (GA) or particle swarm optimization (PSO). Different objective function weightings for the uncertainties of individual points, the distance uncertainties between point pairs, and the angular uncertainties between three points can be defined. Constraints for tracker position limits and minimum measurement distances have also been implemented. Furthermore, position optimization taking into account lines-of-sight (LOS) within complex CAD geometry has also been demonstrated. The code is simple to use and can be a valuable measurement planning tool.
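The flavour of the optimisation can be conveyed with a grid-based (pattern-search-like) sketch that places a single tracker so as to minimise a simple distance-based uncertainty proxy over a set of targets, with a minimum stand-off distance as a constraint. The uncertainty model and all numbers below are crude assumptions, not the NPL model used in the article.

```python
# Place one laser tracker on a grid so that a crude distance-based uncertainty proxy,
# summed over all targets, is minimised while respecting a minimum stand-off distance.
# (Illustrative sketch; the real code uses the NPL tracker uncertainty model and
#  supports GA/PSO, multiple trackers, and line-of-sight checks.)
import itertools
import math

targets = [(0.0, 0.0, 1.0), (2.0, 0.5, 1.2), (1.0, 2.0, 0.8), (3.0, 1.5, 1.0)]
MIN_DISTANCE = 1.5          # m, closest allowed tracker-to-target distance (assumed)
A, B = 15e-6, 6e-6          # m and m/m, assumed fixed and distance-dependent terms

def point_uncertainty(tracker, target):
    d = math.dist(tracker, target)
    return A + B * d        # crude proxy: uncertainty grows linearly with range

def total_uncertainty(tracker):
    if any(math.dist(tracker, t) < MIN_DISTANCE for t in targets):
        return math.inf     # violates the minimum measurement distance constraint
    return sum(point_uncertainty(tracker, t) ** 2 for t in targets)

# Coarse grid search over candidate tracker positions (a stand-in for pattern search).
grid = [(x * 0.25, y * 0.25, 1.8) for x, y in itertools.product(range(-8, 25), repeat=2)]
best = min(grid, key=total_uncertainty)
print("best tracker position:", best, "objective:", total_uncertainty(best))
```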
