959 results for Search Engine
Abstract:
Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors as a way to enable easy development, increased richness in functionality, pluggability, and flexibility of deployment. While open-source portal frameworks do not currently support all WSRP functionality, they could in the future use WSRP Consumers to access remote portlets found through a WSRP Producer registry service. This implies the need for a central registry of remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, a registry, and a client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for describing remote portlets is defined and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright (c) 2007 John Wiley & Sons, Ltd.
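To make the idea of a registry entry for remote portlets concrete, here is a minimal sketch, assuming a simple record with producer endpoint, supported modes and window states, and a keyword search over published entries. The field names, the search logic, and the example values are illustrative assumptions, not the data structure or engine described in the paper.

```python
# Hypothetical registry entry for a remote portlet plus a simple keyword
# search over published entries (illustrative only, not the paper's design).
from dataclasses import dataclass, field

@dataclass
class RemotePortletEntry:
    name: str                 # portlet display name
    producer_url: str         # WSRP Producer endpoint offering the portlet
    description: str = ""     # free-text description used for matching
    modes: list = field(default_factory=lambda: ["view"])             # e.g. view, edit, help
    window_states: list = field(default_factory=lambda: ["normal"])   # e.g. normal, minimized, maximized
    keywords: list = field(default_factory=list)

def search_portlets(registry, query):
    """Return entries whose name, description, or keywords contain every query term."""
    terms = query.lower().split()
    hits = []
    for entry in registry:
        haystack = " ".join([entry.name, entry.description] + entry.keywords).lower()
        if all(term in haystack for term in terms):
            hits.append(entry)
    return hits

registry = [
    RemotePortletEntry(
        name="WeatherPortlet",
        producer_url="http://example.org/wsrp/producer",  # hypothetical endpoint
        description="Shows local weather forecasts",
        keywords=["weather", "forecast"],
    ),
]
print([e.name for e in search_portlets(registry, "weather forecast")])
```

In the architecture described above, entries of this kind would be published to a UDDI registry and queried through the publish and search engine rather than kept in an in-memory list.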
Abstract:
In the judgment of the special appeal concerning the lawsuit filed by the television presenter Xuxa Meneghel to compel Google Search to remove from its search indexes the results for the expression “Xuxa pedófila”, or any other query associating the plaintiff's name with this criminal practice, the reporting justice, Justice Nancy Andrighi, clearly defined the controversy with which this work is concerned: the daily lives of thousands of people currently depend on information that is on the web and that would hardly be found without the search tools offered by search sites. On the other hand, these same horizontal search engines can be used to locate pages with harmful information and URLs returned by searches on a person's name. Given this, what is to be done? Is there really a right to be forgotten, that is, a right to have a URL returned by a search on a person's name delisted from the horizontal search engine's index? Some argue that the most appropriate measure for dealing with this problem would be to go after the third party that originally published the information on the web. Others maintain that protecting a right to be forgotten would pose too great a threat to freedom of expression and information. Against this background, this dissertation seeks to establish the possible characteristics and limits of the right to be forgotten in the digital age under the current state of Brazilian legislation on the matter, weighing that right against other public and private rights and interests (especially the rights to freedom of expression and to information) and taking into account how the world wide web itself operates, and search tools in particular. Given the importance of horizontal search engines for the exercise of access to information and, moreover, the difficulties involved in removing URLs from all the sites on which they have been published, our research focuses on the potential, and the difficulties, of using the regulation of such search tools to effectively protect the right to be forgotten in the digital age.
Abstract:
VANTI, Nadia. Mapeamento das Instituições Federais de Ensino Superior da Região Nordeste do Brasil na Web. Informação & Informação, Londrina, v. 15, p. 55-67, 2010.
Abstract:
The popularization of the Internet has stimulated the appearance of search engines whose objective is to aid users in the process of searching for information on the Web. However, it is common for users to make queries and receive results that do not satisfy their initial needs. The Information Retrieval in Context (IRiX) approach allows information related to a specific theme to be associated with the user's initial query, thereby enabling better results. This study presents a prototype of a search engine based on contexts built from linguistic groupings and on relationships defined by the user. The context information can be shared with other software and with other users of the tool, with the objective of promoting the socialization of contexts.
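As a minimal sketch of what context-based retrieval can mean in practice, the following example expands a query with terms from a user-defined context before ranking documents. The context structure, the scoring, and the sample data are assumptions made for illustration; they are not the prototype described in the abstract.

```python
# Hypothetical context-based retrieval: expand the query with user-defined
# related terms, then rank documents by how many expanded terms they contain.

# A "context" maps a term to related terms the user has associated with it.
context = {
    "apple": ["fruit", "orchard", "cider"],
}

documents = [
    "The orchard produces apples and cider every autumn.",
    "Apple announced a new phone at its annual event.",
]

def expand_query(query, context):
    """Add context-related terms to the original query terms."""
    terms = set(query.lower().split())
    for term in list(terms):
        terms.update(context.get(term, []))
    return terms

def rank(documents, terms):
    """Score each document by the number of expanded terms it contains."""
    scored = []
    for doc in documents:
        words = set(doc.lower().replace(".", "").replace(",", "").split())
        scored.append((len(terms & words), doc))
    return sorted(scored, reverse=True)

terms = expand_query("apple", context)
for score, doc in rank(documents, terms):
    print(score, doc)
```

With the "apple" context above, the orchard document outranks the company announcement, which is the kind of disambiguation a context-aware engine aims for.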
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Ciência da Informação - FFC
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This article reflects on the discursive practices of writing and reading on the net, more specifically on the writing and reading practices of undergraduate and graduate teachers using an internet search engine. We are interested in investigating: i) the (hyper)textual relations established in a context understood as enabled by electronic resources; ii) the discursive marks that arise (or are brought about) in a singular way of reading (and/or writing). The material was produced during a university extension course on reading and cyberspace, in which participants produced drawings of the process of reading on an internet search engine, starting from the signifier “apple”. Drawing on French Discourse Analysis and on assumptions from the New Literacy Studies, we discuss the operating procedures of internet search and the effects of meaning produced by the subject during his/her reading/writing process.
Abstract:
We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized considering their target input material, either single texts or collections of texts, and their focus, which may be on displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users handle results from a query posed to a search engine. We describe the approaches adopted by distinct techniques and briefly review the strategies they employ to obtain meaningful text models, discuss how they extract the information required to produce representative visualizations, the tasks they intend to support and the interaction issues involved, and their strengths and limitations. Finally, we show a summary of techniques, highlighting their goals and distinguishing characteristics. We also briefly discuss some open problems and research directions in the fields of visual text mining and text analytics.
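Many of the surveyed techniques start from a simple vector-space text model before mapping it to a visual form. As a minimal sketch of one such model (tf-idf term weights are assumed here as a common choice, not a technique taken from the survey):

```python
# Minimal tf-idf sketch: the kind of term-weight model that visual summaries
# (word clouds, keyphrase labels) are often computed from. The corpus and the
# whitespace tokenization are illustrative only.
import math
from collections import Counter

docs = [
    "search engines rank documents for a query",
    "visualization techniques summarize document collections",
    "a query posed to a search engine returns ranked documents",
]

tokenized = [d.split() for d in docs]
df = Counter(term for tokens in tokenized for term in set(tokens))  # document frequency
n_docs = len(docs)

def tfidf(tokens):
    """Weight each term by its frequency in the document and rarity in the corpus."""
    tf = Counter(tokens)
    return {t: (count / len(tokens)) * math.log(n_docs / df[t]) for t, count in tf.items()}

# High-weight terms of the first document are candidates for visual emphasis.
weights = tfidf(tokenized[0])
for term, w in sorted(weights.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{term}\t{w:.3f}")
```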
Abstract:
Cogo K, de Andrade A, Labate CA, Bergamaschi CC, Berto LA, Franco GCN, Goncalves RB, Groppo FC. Proteomic analysis of Porphyromonas gingivalis exposed to nicotine and cotinine. J Periodont Res 2012; 47: 766-775. (c) 2012 John Wiley & Sons A/S. Background and Objective: Smokers are more predisposed than nonsmokers to infection with Porphyromonas gingivalis, one of the most important pathogens involved in the onset and development of periodontitis. It has also been observed that tobacco, and tobacco derivatives such as nicotine and cotinine, can induce modifications to P. gingivalis virulence. However, the effect of the major compounds derived from cigarettes on protein expression by P. gingivalis is poorly understood. Therefore, this study aimed to evaluate and compare the effects of nicotine and cotinine on the P. gingivalis proteomic profile. Material and Methods: Total proteins of P. gingivalis exposed to nicotine and cotinine were extracted and separated by two-dimensional electrophoresis. Differentially expressed proteins were identified by liquid chromatography-mass spectrometry and searches against primary sequence databases using the MASCOT search engine, and gene ontology analysis was carried out using DAVID tools. Results: Of the approximately 410 protein spots that were reproducibly detected on each gel, 23 were differentially expressed in at least one of the treatments. A particular increase was seen in proteins involved in metabolism, virulence and acquisition of peptides, protein synthesis and folding, transcription and oxidative stress. Few proteins showed significant decreases in expression; those that did are involved in cell envelope biosynthesis and proteolysis and also in metabolism. Conclusion: Our results characterized the changes in the proteome of P. gingivalis following exposure to nicotine and cotinine, suggesting that these substances may modulate, with minor changes, protein expression. The present study is, in part, a step toward understanding the potential smoke-pathogen interaction that may occur in smokers with periodontitis.
Abstract:
Background: The hypothalamus plays a pivotal role in numerous mechanisms highly relevant to the maintenance of body homeostasis, such as the control of food intake and energy expenditure. Impairment of these mechanisms has been associated with the metabolic disturbances involved in the pathogenesis of obesity. Since rodent species constitute important models for metabolism studies and the rat hypothalamus is poorly characterized by proteomic strategies, we performed experiments aimed at constructing a two-dimensional gel electrophoresis (2-DE) profile of rat hypothalamus proteins. Results: As a first step, we established the best conditions for tissue collection and protein extraction, quantification and separation. The extraction buffer composition selected for proteome characterization of the rat hypothalamus was urea 7 M, thiourea 2 M, CHAPS 4%, Triton X-100 0.5%, followed by a precipitation step with chloroform/methanol. Two-dimensional (2-D) gels of hypothalamic extracts from four-month-old rats were analyzed; the protein spots were digested and the proteins identified by tandem mass spectrometry and database queries using the MASCOT protein search engine. Eighty-six hypothalamic proteins were identified, the majority of which were classified as participating in metabolic processes, consistent with the finding of a large number of proteins with catalytic activity. Genes encoding proteins identified in this study have been related to obesity development. Conclusion: The present results indicate that the 2-DE technique will be useful for nutritional studies focusing on hypothalamic proteins. The data presented herein will serve as a reference database for studies testing the effects of dietary manipulations on the hypothalamic proteome. We trust that these experiments will lead to important knowledge on protein targets of nutritional variables potentially able to affect the complex central nervous system control of energy homeostasis.
Abstract:
E-NATURAL is a web portal that lists companies in the rural-tourism sector; on this portal they advertise and sell their products and services. Each company has its own individual web space for promoting itself on the internet. A search engine allows users to access the content of every company registered in the system. This search engine is open, and any unregistered user can look up information about the products and services on offer and view all the information related to each company. Registered companies have access to a complete, easy-to-use and intuitive information system that lets each of them manage all the content of its products and web pages individually. A content management system is also included that automatically generates professional web pages, with the option of editing them. In addition, registered users can book products through a complete reservation management system, with particular attention to accommodation; purchase products through a complete shopping system integrated with the PayPal platform; and rate the products and web pages of the system through ranking-based voting. The platform includes a system for managing comments on products and on company web pages that allows content to be shown or hidden. Finally, users can share information about content published on the pages through social networks such as Twitter, Google+ and Facebook.
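As a purely illustrative sketch of the booking features described above (field names, types, and the availability rule are assumptions, not taken from the E-NATURAL system), a reservation subsystem could be modeled along these lines:

```python
# Hypothetical data model for accommodation bookings in a rural-tourism portal.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Reservation:
    guest: str
    check_in: date
    check_out: date

@dataclass
class Product:
    name: str
    company: str            # the registered rural-tourism company offering it
    price_per_night: float
    reservations: list = field(default_factory=list)

def is_available(product, check_in, check_out):
    """True if the requested period does not overlap any existing reservation."""
    for r in product.reservations:
        if check_in < r.check_out and r.check_in < check_out:
            return False
    return True

cottage = Product("Stone Cottage", "Casa Rural Example", 80.0)
if is_available(cottage, date(2024, 7, 1), date(2024, 7, 5)):
    cottage.reservations.append(Reservation("guest@example.com", date(2024, 7, 1), date(2024, 7, 5)))
print(len(cottage.reservations))
```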
Abstract:
Matita (which means “pencil” in Italian) is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique behind the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or “undo”) past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group; the author of this thesis is one of those members. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians like to write them on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine that permits typing formulae in the familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces do not allow the execution point to be positioned inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals that can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof-assistant users often need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms.
Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae through their rendered forms is also possible. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) that can independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
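To make the tactic/tactical distinction discussed above concrete, here is a toy sketch in Python (not Matita's implementation and not its proof language): a tactic maps one goal to a list of remaining sub-goals, tacticals combine tactics, and the comments indicate why big-step evaluation of a composed tactical hides the intermediate goals that a tinycal-like interface would expose. All names and the goal encoding are invented for the example.

```python
# Toy LCF-style tacticals: tactics map a goal (a string here) to remaining
# sub-goals; tacticals combine tactics into larger proof steps.

def intro(goal):
    """Pretend tactic: strip one hypothesis marker from the goal."""
    if goal.startswith("H->"):
        return [goal[len("H->"):]]
    raise ValueError("intro does not apply")

def assumption(goal):
    """Pretend tactic: close the goal if it is trivially true."""
    if goal == "True":
        return []
    raise ValueError("assumption does not apply")

def then_(tac1, tac2):
    """Sequencing tactical: run tac1, then tac2 on every generated sub-goal."""
    def combined(goal):
        subgoals = []
        for g in tac1(goal):
            subgoals.extend(tac2(g))
        return subgoals
    return combined

def orelse(tac1, tac2):
    """Alternative tactical: try tac1, fall back to tac2 if it fails."""
    def combined(goal):
        try:
            return tac1(goal)
        except ValueError:
            return tac2(goal)
    return combined

# Big-step execution: the composed tactical is a single opaque script step,
# so the interface only ever shows the final result (here, no goals left)...
script_step = then_(intro, orelse(assumption, intro))
print(script_step("H->True"))   # []

# ...whereas a tinycal-like interface lets the execution point stop between
# `intro` and the `orelse`, exposing the intermediate goal.
print(intro("H->True"))         # ['True']
```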