962 results for Web, Search Engine, Overlap
Abstract:
Conceptual modelling approaches for the web need extensions to specify dynamic personalization properties in order to design more powerful web applications. Current approaches provide techniques to support dynamic personalization, usually focused on implementation details. This article presents an extension of the OO-H conceptual modeling approach to address the particulars associated with the design and specification of dynamic personalization. The main benefit is that this specification can be modified without recompiling the rest of the application modules. We describe how conventional navigation and presentation diagrams are influenced by personalization properties. In order to model the variable part of the interface logic, OO-H provides a personalization architecture that relies on a rule engine. Rules are defined based on a User Model and a Reference Model.
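A minimal sketch of the rule-driven idea: rules are data, so personalization can change without recompiling the application. All names here (User, Rule, the engine behavior) are illustrative assumptions, not the OO-H notation.

```python
# Hypothetical condition/action rules evaluated over a toy "User Model";
# matching rules mutate the navigation/presentation state of a view.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class User:
    visits: int = 0
    interests: set = field(default_factory=set)

@dataclass
class Rule:
    condition: Callable[[User], bool]   # predicate over the User Model
    action: Callable[[dict], None]      # mutation of the view state

def personalize(user: User, view: dict, rules: list) -> dict:
    # The "rule engine": apply every rule whose condition holds.
    for rule in rules:
        if rule.condition(user):
            rule.action(view)
    return view

rules = [
    Rule(lambda u: u.visits > 10,
         lambda v: v.update(layout="compact")),
    Rule(lambda u: "sports" in u.interests,
         lambda v: v.setdefault("links", []).append("/sports")),
]
print(personalize(User(visits=12, interests={"sports"}), {}, rules))
```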
Abstract:
Modern compilers offer a large and ever-increasing number of options which can modify the features and behavior of a compiled program. Many of these options go unused because exploiting them requires comprehensive knowledge of both the underlying architecture and the internal processes of the compiler. In this context, there is usually not a single design goal but a more complex set of objectives. In addition, the dependencies between different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application. This is accomplished by automatic variation of the compilation options by means of multi-objective optimization and evolutionary computation driven by the NSGA-II algorithm. This allows finding compilation options that simultaneously optimize different objectives. The advantages of our proposal are illustrated by means of a case study based on the well-known Apache web server. Our strategy has demonstrated an ability to find improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and also discovers the most important bottlenecks involved in the application performance.
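A minimal sketch of the search loop under stated assumptions: compiler options as a bit vector and two objectives to minimize (context switches and L2 cache misses, per the abstract). The measure() function is a hypothetical stand-in for building and profiling the application, and the selection shown is plain Pareto filtering, a simplification of full NSGA-II.

```python
import random

FLAGS = ["-O2", "-funroll-loops", "-fomit-frame-pointer", "-finline-functions"]

def measure(genome):
    # Hypothetical: would compile with the enabled flags and profile the
    # program; here it returns synthetic (context_switches, l2_misses).
    on = sum(genome)
    return (100 - 3 * on + random.random(), 50 + 2 * on - random.random())

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(pop, scores):
    # Keep genomes whose objective vector no other genome dominates.
    return [g for g, s in zip(pop, scores)
            if not any(dominates(t, s) for t in scores)]

pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(20)]
for _ in range(30):                                  # generations
    scores = [measure(g) for g in pop]
    front = pareto_front(pop, scores)
    # Refill the population by mutating non-dominated parents.
    pop = [[bit ^ (random.random() < 0.1) for bit in random.choice(front)]
           for _ in range(20)]

best = pareto_front(pop, [measure(g) for g in pop])
print([" ".join(f for f, b in zip(FLAGS, g) if b) for g in best])
```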
Abstract:
Currently there is no structured data standard for representing elements commonly found in transmedia fictional universes. There are websites dedicated to individual universes; however, the information found on these sites separates the various formats into books, movies, comics, etc.; concentrates only on the bibliographic aspects of the material; and is only full-text searchable. We have created an ontological model that will allow researchers, fans, brand managers, and creators to search for and retrieve the information contained in these worlds based on how they are structured. We conducted a domain analysis and user studies based on the contents of Harry Potter, Lord of the Rings, the Marvel Universe, and Star Wars in order to build a new model using the Web Ontology Language (OWL) and an artificial intelligence reasoning engine. This model can infer connections between characters, elements of power, items, places, events, etc. This model will facilitate better search and retrieval of the information contained within these vast story universes for all users interested in them. The result of this project is an OWL ontology that is intuitive for users, can be used by AI systems, and has been updated to reflect real user needs based on user research.
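A simplified sketch of the underlying idea using rdflib: the project builds a full OWL ontology with a reasoner, whereas the plain triples and SPARQL query below only illustrate format-independent retrieval over structured facts. The namespace, properties, and entities are illustrative, not the project's model.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/universe#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# A few structured facts about a fictional universe, regardless of whether
# they originate in a book, a film, or a comic.
g.add((EX.HarryPotter, RDF.type, EX.Character))
g.add((EX.ElderWand, RDF.type, EX.Item))
g.add((EX.HarryPotter, EX.wields, EX.ElderWand))
g.add((EX.ElderWand, RDFS.label, Literal("Elder Wand")))

# Retrieve every character together with the items they wield.
q = """
SELECT ?char ?item WHERE {
    ?char a ex:Character .
    ?char ex:wields ?item .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.char, row.item)
```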
Abstract:
Spatial data has now been used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large sizes and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by applications is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine. That is, the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
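A minimal sketch of server-side resolution selection, assuming precomputed geometry levels: the coarsest level that still satisfies the requested error tolerance is returned, so a low-resolution request costs fewer points. The level layout and pick_level interface are assumptions, not the paper's engine.

```python
# Precomputed representations of one polyline, coarse to fine.
LEVELS = [
    {"tolerance": 10.0, "points": [(0, 0), (50, 40), (100, 0)]},
    {"tolerance": 2.0,  "points": [(0, 0), (20, 15), (50, 40), (80, 15), (100, 0)]},
    {"tolerance": 0.5,  "points": [(0, 0), (10, 8), (20, 15), (35, 30), (50, 40),
                                   (65, 30), (80, 15), (90, 8), (100, 0)]},
]

def pick_level(required_tolerance):
    # Coarser levels mean fewer points: less server work, less traffic.
    for level in LEVELS:                      # scan coarse to fine
        if level["tolerance"] <= required_tolerance:
            return level
    return LEVELS[-1]                         # fall back to the finest level

# An overview map tolerating 5 map units of error gets 5 points, not 9.
print(len(pick_level(5.0)["points"]))         # -> 5
```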
Abstract:
Many queries sent to search engines refer to specific locations in the world. Location-based queries try to find local services and facilities around the user’s environment or in a particular area. This paper reviews the specifications of geospatial queries and discusses the similarities and differences between location-based queries and other queries. We introduce nine patterns for location-based queries containing either a service name alone or a service name accompanied by a location name. Our survey indicates that at least 22% of Web queries have a geospatial dimension and most of these can be considered as location-based queries. We propose that location-based queries should be treated differently from general queries to produce more relevant results.
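A sketch of pattern-based detection of location-based queries. The paper defines nine patterns; only two illustrative ones appear below, and the service and location word lists are stand-ins for real gazetteer lookups.

```python
import re

SERVICES = {"pizza", "hotel", "dentist", "pharmacy"}   # illustrative
LOCATIONS = {"berlin", "tehran", "sydney"}             # illustrative

def classify(query):
    tokens = query.lower().split()
    # Pattern: "<service> in <location>", e.g. "hotel in berlin".
    m = re.match(r"^(\w+) in (\w+)$", query.lower())
    if m and m.group(1) in SERVICES and m.group(2) in LOCATIONS:
        return ("service+location", m.group(1), m.group(2))
    # Pattern: "<service>" alone -> resolve against the user's environment.
    if len(tokens) == 1 and tokens[0] in SERVICES:
        return ("service-only", tokens[0], None)
    return ("non-geospatial", None, None)

for q in ["hotel in berlin", "pizza", "quantum computing"]:
    print(q, "->", classify(q))
```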
Abstract:
In multimedia retrieval, a query is typically refined interactively towards the ‘optimal’ answers by exploiting user feedback. However, in existing work, the refined query is re-evaluated from scratch in each iteration. This is not only inefficient but also fails to exploit the answers that may be common between iterations. In this paper, we introduce a new approach called SaveRF (Save random accesses in Relevance Feedback) for iterative relevance feedback search. SaveRF predicts the potential candidates for the next iteration and maintains this small set for efficient sequential scan. By doing so, repeated candidate accesses can be saved, reducing the number of random accesses. In addition, an efficient scan of the overlap before the search starts also tightens the search space with a smaller pruning radius. We implemented SaveRF, and our experimental study on real-life data sets shows that it can reduce the I/O cost significantly.
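A minimal sketch of the idea as stated in the abstract: keep the candidate set predicted for the next iteration and scan it sequentially, rather than re-evaluating the refined query over the whole collection. The data layout, the prediction margin, and the refine() stub are assumptions for illustration, not the SaveRF algorithm itself.

```python
import math

def rf_search(data, query, radius, margin=1.5):
    candidates = list(enumerate(data))        # first iteration: full scan
    while True:
        # Sequential scan over the saved candidates only.
        answers = [(i, p) for i, p in candidates
                   if math.dist(p, query) <= radius]
        yield [i for i, _ in answers]
        # Predict candidates for the next refined query: keep a slightly
        # larger neighborhood so the refined query still falls inside it.
        candidates = [(i, p) for i, p in candidates
                      if math.dist(p, query) <= radius * margin]
        query = refine(query, answers)        # user feedback step (stub)

def refine(query, answers):
    # Stub: nudge the query toward the centroid of the current answers.
    if not answers:
        return query
    cx = sum(p[0] for _, p in answers) / len(answers)
    cy = sum(p[1] for _, p in answers) / len(answers)
    return ((query[0] + cx) / 2, (query[1] + cy) / 2)

data = [(0.1, 0.2), (0.3, 0.1), (0.9, 0.9), (0.2, 0.25)]
it = rf_search(data, (0.2, 0.2), radius=0.15)
print(next(it))   # iteration 1 answers
print(next(it))   # iteration 2, evaluated on the saved overlap only
```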
Abstract:
This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines - neural nets, machine learning and knowledge acquisition - hence it is referred to as a "coalesced" machine. To this unifying aspect are added the various advantages of its orthogonal lattice structure as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure - the Hilbert hypercube of an n-dimensional space, using a complemented distributive lattice as a basis for supervised learning - is derived from first principles. The resulting "subhypercube theory" was implemented in a development machine, which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be researched further.
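An illustrative sketch only: the thesis's machine is far richer, but its basic object - subhypercubes of the n-cube covering the positive examples, i.e. the product terms of a minimal disjunctive form - can be shown in a few lines. Here '*' marks a free coordinate of a subcube; the merge/cover procedure below is a generic subcube construction, not the thesis's algorithm.

```python
from itertools import product

def covers(cube, point):
    # A cube covers a point (or subsumes another cube) if every fixed
    # coordinate matches.
    return all(c == "*" or c == p for c, p in zip(cube, point))

def merge(a, b):
    # Two subcubes differing in exactly one fixed coordinate merge into
    # a larger subcube covering the union of their vertices.
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and "*" not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + "*" + a[i + 1:]
    return None

positives = ["000", "001", "101", "100"]   # positive examples in {0,1}^3

cubes = set(positives)
grew = True
while grew:                                # grow subcubes to saturation
    grew = False
    for a, b in product(sorted(cubes), repeat=2):
        m = merge(a, b)
        if m and m not in cubes:
            cubes.add(m)
            grew = True

# Keep only maximal cubes (not subsumed by a larger cube in the set).
maximal = [c for c in cubes if not any(d != c and covers(d, c) for d in cubes)]
print(maximal)   # -> ['*0*'], the single term "middle variable = 0"
```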
Towards a web-based progressive handwriting recognition environment for mathematical problem solving
Abstract:
The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer by using handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and present its performance analysis.
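A minimal sketch of the server side, assuming the editor has already recognized the handwriting into a text expression: Flask and SymPy stand in for the paper's web server and computation engine (both are substitutions for illustration, not the WebMath implementation).

```python
from flask import Flask, request, jsonify
import sympy

app = Flask(__name__)
x = sympy.symbols("x")

# Operations mirroring the abstract: simplification, factorization,
# integration and differentiation (in one variable, for brevity).
OPS = {
    "simplify": sympy.simplify,
    "factor": sympy.factor,
    "integrate": lambda e: sympy.integrate(e, x),
    "diff": lambda e: sympy.diff(e, x),
}

@app.route("/compute", methods=["POST"])
def compute():
    # The Ajax communicator would POST {"expr": "...", "op": "..."} here.
    payload = request.get_json()
    expr = sympy.sympify(payload["expr"])
    result = OPS[payload["op"]](expr)
    return jsonify(result=str(result))

if __name__ == "__main__":
    app.run()
    # e.g. curl -X POST -H 'Content-Type: application/json' \
    #      -d '{"expr": "x**2 - 1", "op": "factor"}' localhost:5000/compute
```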
Abstract:
Evaluations of semantic search systems are generally small scale and ad hoc due to the lack of appropriate resources such as test collections, agreed performance criteria and independent judgements of performance. By analysing our work in building and evaluating semantic tools over the last five years, we conclude that the growth of the semantic web has led to an improvement in the available resources and, consequently, in the robustness of performance assessments. We propose two directions for continuing evaluation work: the development of extensible evaluation benchmarks and the use of logging parameters for evaluating individual components of search systems.
Abstract:
In this paper we propose algorithms for combining and ranking answers from distributed heterogeneous data sources in the context of a multi-ontology Question Answering task. Our proposal includes a merging algorithm that aggregates, combines and filters ontology-based search results and three different ranking algorithms that sort the final answers according to different criteria such as popularity, confidence and semantic interpretation of results. An experimental evaluation on a large scale corpus indicates improvements in the quality of the search results with respect to a scenario where the merging and ranking algorithms were not applied. These collective methods for merging and ranking make it possible to answer questions that are distributed across ontologies while, at the same time, filtering irrelevant answers, fusing similar answers together, and eliciting the most accurate answer(s) to a question.
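A sketch of a merge-then-rank pipeline in the spirit of the abstract. The fusion key, the scoring fields and the weights are illustrative assumptions; the paper defines its own popularity, confidence and semantic-interpretation criteria.

```python
from collections import defaultdict

def merge(result_lists):
    # Fuse answers denoting the same entity (here: same normalized label),
    # counting how many sources returned each one (a popularity signal).
    fused = defaultdict(lambda: {"count": 0, "confidence": 0.0})
    for results in result_lists:
        for answer in results:
            key = answer["label"].strip().lower()
            fused[key]["count"] += 1
            fused[key]["confidence"] = max(fused[key]["confidence"],
                                           answer["confidence"])
    return fused

def rank(fused, w_pop=0.6, w_conf=0.4):
    # Hypothetical linear combination of popularity and confidence.
    score = lambda v: w_pop * v["count"] + w_conf * v["confidence"]
    return sorted(fused.items(), key=lambda kv: score(kv[1]), reverse=True)

lists = [
    [{"label": "Paris", "confidence": 0.9}, {"label": "Lyon", "confidence": 0.4}],
    [{"label": "paris", "confidence": 0.7}],   # fused with "Paris" above
]
for label, v in rank(merge(lists)):
    print(label, v)
```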
Abstract:
This paper presents our Semantic Web portal infrastructure, which focuses on how to enhance knowledge access in traditional Web portals by gathering and exploiting semantic metadata. Special attention is paid to three important issues that affect the performance of knowledge access: i) high quality metadata acquisition, which concerns how to ensure high quality while gathering semantic metadata from heterogeneous data sources; ii) semantic search, which addresses how to meet the information querying needs of ordinary end users who are not necessarily familiar with the problem domain or the supported query language; and iii) semantic browsing, which concerns how to help users understand and explore the problem domain.
Abstract:
The goal of semantic search is to improve on traditional search methods by exploiting semantic metadata. In this paper, we argue that supporting iterative and exploratory search modes is important to the usability of all search systems. We also identify the types of semantic queries the users need to make, the issues concerning the search environment and the problems that are intrinsic to semantic search in particular. We then review the four modes of user interaction in existing semantic search systems, namely keyword-based, form-based, view-based and natural language-based systems. Future development should focus on multimodal search systems, which exploit the advantages of more than one mode of interaction, and on developing search systems that can search heterogeneous semantic metadata on the open Semantic Web.
Abstract:
The Protein pKa Database (PPD) v1.0 provides a compendium of protein residue-specific ionization equilibria (pKa values), as collated from the primary literature, in the form of a web-accessible PostgreSQL relational database. Ionizable residues play key roles in the molecular mechanisms that underlie many biological phenomena, including protein folding and enzyme catalysis. The PPD serves as a general protein pKa archive and as a source of data that allows for the development and improvement of pKa prediction systems. The database is accessed through an HTML interface, which offers two fast, efficient search methods: an amino acid-based query and a Basic Local Alignment Search Tool search. Entries also give details of experimental techniques and links to other key databases, such as National Center for Biotechnology Information and the Protein Data Bank, providing the user with considerable background information.
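A sketch of what an amino acid-based query might look like, against a hypothetical schema: the real PPD is a PostgreSQL database behind an HTML interface, and its table and column names are not given in this abstract, so everything below (including the sample rows) is illustrative.

```python
import sqlite3  # stands in for PostgreSQL to keep the sketch self-contained

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pka_values (
    protein TEXT, residue TEXT, position INTEGER,
    pka REAL, method TEXT, pdb_id TEXT)""")
conn.executemany(
    "INSERT INTO pka_values VALUES (?, ?, ?, ?, ?, ?)",
    [("lysozyme", "GLU", 35, 6.2, "NMR", "1LYZ"),
     ("lysozyme", "ASP", 52, 3.7, "NMR", "1LYZ"),
     ("RNase A",  "HIS", 12, 5.8, "NMR", "7RSA")])

# Amino acid-based query: all collated pKa values for glutamate residues.
rows = conn.execute(
    "SELECT protein, position, pka, method, pdb_id "
    "FROM pka_values WHERE residue = ? ORDER BY pka", ("GLU",)).fetchall()
for row in rows:
    print(row)
```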
Abstract:
In this paper, a technique for determining the shortest route of a fire engine to the fire site under a time-minimization criterion, using evolutionary modeling, is offered. The algorithm of its realization, based on both the complete and an optimized space of search over possible decisions, is explored. Aspects of forming the goal function and of the purpose-built program realization of the method are considered. Experimental verification is performed, and the results of a comparative analysis with expert conclusions are presented.
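A minimal sketch of an evolutionary route search under a time-minimization criterion: candidate routes are waypoint sequences between station and fire site, evaluated on a travel-time matrix. The graph, the goal function and the GA settings are illustrative assumptions, not the paper's experimental setup.

```python
import random

# Travel times between nodes 0..4 (0 = fire station, 4 = fire site).
T = [[0, 4, 9, 7, 99],
     [4, 0, 3, 8, 6],
     [9, 3, 0, 2, 5],
     [7, 8, 2, 0, 4],
     [99, 6, 5, 4, 0]]

def time_of(route):
    # Goal function: total travel time station -> waypoints -> fire site.
    path = [0] + route + [4]
    return sum(T[a][b] for a, b in zip(path, path[1:]))

def mutate(route):
    r = route[:]
    if len(r) >= 2:                       # swap two waypoints
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
    return r

middle = [1, 2, 3]                        # optional intermediate nodes
pop = [random.sample(middle, random.randint(0, 3)) for _ in range(30)]
for _ in range(100):                      # generations
    pop.sort(key=time_of)
    # Keep the 10 fastest routes, refill by mutating them.
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = min(pop, key=time_of)
print([0] + best + [4], time_of(best))
```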
Abstract:
The paper has been presented at the International Conference Pioneers of Bulgarian Mathematics, dedicated to Nikola Obreshkoff and Lubomir Tschakaloff, Sofia, July 2006.