923 results for Search Engine Optimization Methods


Relevance:

100.00%

Publisher:

Abstract:

This dissertation results from an investigation that carried out a webometric study of the presence of Portuguese universities on the Web, assessing the visibility of the institutions through the calculation of a webometric indicator, the Web Impact Factor. The World Wide Web is currently one of the main means of disseminating information. Information metric studies aim to quantify and evaluate the production of information, the object of study of disciplines such as informetrics, scientometrics and bibliometrics. More recently, cybermetrics and webometrics have emerged as new disciplines that study the production and dissemination of information in the context of cyberspace and the World Wide Web, respectively. Universities, as privileged centres of knowledge production and dissemination, are the natural object of study of webometrics, and evaluating their presence on the World Wide Web contributes to the analysis of these institutions' performance. This work adopted the methodology proposed by Noruzi, which calculates three categories of Web Impact Factor: the Total WIF, the Revised WIF and the Selflink WIF. To calculate these categories, quantitative data were collected on inlinks, selflinks, the total number of pages and the number of pages indexed by the search engine. The search engine used was AltaVista, with Boolean-expression queries performed during the first half of 2009. After collection, the data were treated statistically and the WIF categories were calculated. The study concludes that Portuguese public universities have greater visibility, since they obtain better results in two categories of the Web Impact Factor: the Revised WIF and the Selflink WIF.
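For concreteness, a minimal sketch of the three WIF categories as ratios of link counts to page counts, assuming the usual definitions (Total WIF uses all links, Revised WIF external inlinks only, Selflink WIF internal links only); all figures below are hypothetical, not data from the study.

# Illustrative calculation of the three Web Impact Factor (WIF)
# categories described above. Sample figures are hypothetical.

def total_wif(inlinks: int, selflinks: int, pages: int) -> float:
    """Total WIF: all links pointing to the site / pages indexed."""
    return (inlinks + selflinks) / pages

def revised_wif(inlinks: int, pages: int) -> float:
    """Revised WIF: external inlinks only / pages indexed."""
    return inlinks / pages

def selflink_wif(selflinks: int, pages: int) -> float:
    """Selflink WIF: internal links only / pages indexed."""
    return selflinks / pages

inlinks, selflinks, pages = 12_500, 48_000, 30_000  # hypothetical counts
print(total_wif(inlinks, selflinks, pages))    # 2.0167
print(revised_wif(inlinks, pages))             # 0.4167
print(selflink_wif(selflinks, pages))          # 1.6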

Relevance:

100.00%

Publisher:

Abstract:

Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
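A minimal one-dimensional sketch of the Newton extrapolation idea: iterate x <- x - f'(x)/f''(x) on a goodness-of-fit measure f. The quadratic fit function here is an illustrative stand-in for a real model-to-image match score.

# Newton's method driven to a stationary point of a goodness-of-fit
# measure f, with central finite-difference estimates of f' and f''.

def newton_maximize(f, x0, h=1e-5, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)            # f'(x)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # f''(x)
        if abs(d2) < 1e-12:
            break
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: goodness-of-fit peaked at x = 3 (illustrative only).
fit = lambda x: -(x - 3.0) ** 2 + 1.0
print(newton_maximize(fit, x0=0.0))  # converges to ~3.0 in one step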

Relevance:

100.00%

Publisher:

Abstract:

Advances made over the past decade in structure determination from powder diffraction data are reviewed with particular emphasis on algorithmic developments and the successes and limitations of the technique. While global optimization methods have been successful in the solution of molecular crystal structures, new methods are required to make the solution of inorganic crystal structures more routine. The use of complementary techniques such as NMR to assist structure solution is discussed and the potential for the combined use of X-ray and neutron diffraction data for structure verification is explored. Structures that have proved difficult to solve from powder diffraction data are reviewed and the limitations of structure determination from powder diffraction data are discussed. Furthermore, the prospects of solving small protein crystal structures over the next decade are assessed.
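Illustrative of the global-optimization family the review discusses, a toy simulated-annealing loop; a real structure-solution code would perturb atomic coordinates in the unit cell and score agreement with the observed powder pattern (e.g. an R-factor), for which the placeholder cost below stands in.

import math, random

# Toy simulated annealing: accept downhill moves always, uphill moves
# with Boltzmann probability, while the temperature decays.

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        ce = cost(cand)
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

placeholder_cost = lambda x: (x - 1.234) ** 2  # stands in for a real R-factor
print(anneal(placeholder_cost, x0=10.0))       # approaches x ~ 1.234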

Relevance:

100.00%

Publisher:

Abstract:

The CAFS search engine is a real machine in a virtual machine world; it is the hardware component of ICL's CAFS system. The paper is an introduction and prelude to the set of papers in this volume on CAFS applications. It defines the CAFS system and its context together with the function of its hardware and software components. It examines CAFS' role in the broad context of application development and information systems; it highlights some techniques and applications which exploit the CAFS system. Finally, it concludes with some suggestions for possible further developments. 'Search out thy wit for secret policies / And we will make thee famous through the world' (Henry VI, 1:3).

Relevance:

100.00%

Publisher:

Abstract:

We have developed a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second. Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
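The mutation stage that the hardware array implements is simple to state in software; a minimal bit-flip sketch over binary chromosomes (population and mutation rate are illustrative).

import random

# Bit-flip mutation over binary chromosomes: each gene flips
# independently with probability p_mut.

def mutate(chromosome: str, p_mut: float = 0.01) -> str:
    return "".join(
        ("1" if gene == "0" else "0") if random.random() < p_mut else gene
        for gene in chromosome
    )

population = ["10110010", "01100111", "11110000", "00001111", "10101010"]
print([mutate(c, p_mut=0.1) for c in population])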

Relevance:

100.00%

Publisher:

Abstract:

A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the array's randomness, which would result in longer times for the algorithm to converge to a solution. In recent years there has been growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique which attempts to capture the power of natural selection by evolving a population of candidate solutions through a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
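A software model of the reseeding scheme described: a row of mixed (linear) congruential generators x <- (a*x + c) mod m in which each generator's next seed is the output of its predecessor. The constants are the classic textbook ones, not those of the VLSI design.

# Array of mixed congruential generators with predecessor reseeding.

A, C, M = 1103515245, 12345, 2**31  # classic LCG constants (illustrative)

def step(seeds):
    """Advance the whole array once, then reseed each generator
    (except the first) with its predecessor's fresh output."""
    outputs = [(A * s + C) % M for s in seeds]
    reseeded = [outputs[0]] + outputs[:-1]
    return outputs, reseeded

seeds = [1, 2, 3, 4, 5]
for _ in range(3):
    outputs, seeds = step(seeds)
    print(outputs)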

Relevance:

100.00%

Publisher:

Abstract:

Parameters to be determined in a least squares refinement calculation to fit a set of observed data may sometimes usefully be 'predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least squares refinement calculation is described, which leaves the operator in full control of the weight to attach to the predicate values of the parameters.
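A minimal sketch of the idea, assuming the common formulation in which predicate values enter as extra weighted observations: minimize ||Ap - y||^2 + w||p - p0||^2, with the operator controlling the weight w. All numbers below are illustrative.

import numpy as np

# Predicate values p0 are appended as weighted pseudo-observations,
# so the refinement balances the data against the independent estimates.

def predicated_lsq(A, y, p0, w):
    n = len(p0)
    A_aug = np.vstack([A, np.sqrt(w) * np.eye(n)])
    y_aug = np.concatenate([y, np.sqrt(w) * p0])
    p, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return p

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.1, 2.9, 4.2])
p0 = np.array([1.0, 1.0])                 # predicate values from theory
print(predicated_lsq(A, y, p0, w=0.0))    # pure data fit
print(predicated_lsq(A, y, p0, w=100.0))  # pulled toward p0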


Relevance:

100.00%

Publisher:

Abstract:

Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors as a way to enable easier development, richer functionality, pluggability, and flexibility of deployment. While they do not currently support all WSRP functionality, open-source portal frameworks could in future use WSRP Consumers to access remote portlets found through a WSRP Producer registry service. This implies the need for a central registry of remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture comprising a Web Services repository, a registry, and a client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for describing remote portlets is defined and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish-and-search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions.

Relevance:

100.00%

Publisher:

Abstract:

A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization that reflects the strategy for the definition of types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. To allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability, and trends of the annual frequency time series. The methodological concept of the classifications is partly reflected in these properties of the resulting catalogs. It is shown that, compared with automated methods, the types of subjective classifications exhibit higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency towards longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.
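As a hedged illustration of the optimization-algorithm category, a k-means-style partitioning of daily circulation fields into a fixed number of types; the data here are random stand-ins, whereas real catalogs use gridded pressure fields over harmonized domains.

import numpy as np

# k-means partitioning of daily fields (each day flattened to a vector)
# into a fixed number of circulation types.

def kmeans_types(fields, n_types=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = fields[rng.choice(len(fields), n_types, replace=False)]
    for _ in range(iters):
        # Assign each day to the nearest type (Euclidean distance metric).
        d = np.linalg.norm(fields[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned days.
        for k in range(n_types):
            if np.any(labels == k):
                centroids[k] = fields[labels == k].mean(axis=0)
    return labels, centroids

days, gridpoints = 365, 50  # illustrative sizes
fields = np.random.default_rng(1).normal(size=(days, gridpoints))
labels, _ = kmeans_types(fields, n_types=9)
print(np.bincount(labels))  # class sizes of the resulting catalog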

Relevance:

100.00%

Publisher:

Abstract:

Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
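A minimal sketch of the yes-no Bloom filter query logic: report membership only if the yes-filter recognises the object and the no-filter, populated with the selected false positives (e.g. by the ILP/ADP step), does not. Sizes and hash scheme are illustrative.

import hashlib

class Bloom:
    """Standard Bloom filter over an m-bit mask with k hash positions."""
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class YesNoBloom:
    def __init__(self, m=256, k=3):
        self.yes, self.no = Bloom(m, k), Bloom(m, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):  # selected e.g. by the ILP/ADP step
        self.no.add(item)

    def __contains__(self, item):
        # Member only if the yes-filter says yes AND the no-filter says no.
        return item in self.yes and item not in self.no

f = YesNoBloom()
f.add("apple")
print("apple" in f)  # True
print("pear" in f)   # very likely False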

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a novel approach to multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach that combines two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated within a Maximum a Posteriori (MAP) framework. To approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison with Simulated Annealing, which is often infeasible in many real image processing applications. The Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
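As a hedged illustration, one standard combinatorial optimizer applicable in this MAP framework is Iterated Conditional Modes (ICM); the sketch below combines a Gaussian likelihood with a Potts smoothness prior. The paper combines several such algorithms from multiple initializations; parameters and data here are illustrative.

import numpy as np

# ICM for MAP labeling: at each pixel, pick the class minimizing the
# Gaussian data term plus a Potts penalty for disagreeing neighbours.

def icm(image, means, beta=1.5, sweeps=5):
    labels = np.abs(image[..., None] - means).argmin(-1)  # ML initialization
    H, W = image.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = (image[i, j] - means[k]) ** 2  # likelihood term
                    # Potts prior: penalise disagreement with 4-neighbours.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k
    return labels

rng = np.random.default_rng(0)
truth = np.zeros((16, 16)); truth[:, 8:] = 1.0
noisy = truth + rng.normal(scale=0.4, size=truth.shape)
print(icm(noisy, means=np.array([0.0, 1.0])))  # recovers the two regions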

Relevance:

100.00%

Publisher:

Abstract:

This academic work is the fruit of everyday professional observation of the relationship between the State and its public-law entities, on the one hand, and private parties, on the other. It in no way seeks to discredit theories, opinions and legal arguments favourable to the differentiated, case-by-case pro-State model currently in force. Thus, in line with the markedly professional academic-scientific orientation of this Master's programme, and in the spirit of pluralism of ideas, the first two chapters describe, in a dialectical register, the factual and legal context, so as to foster discussion and reflection on a matter of clear relevance to the effective improvement of judicial services; that matter is then examined against theoretical guidelines and, in particular, against a contextual understanding of our constitutional order. The starting point was concrete situations experienced in the forensic environment of a unit of the Federal Courts (the 2nd Federal Court of Petrópolis, Judicial Section of the State of Rio de Janeiro), which, as is well known, has jurisdiction over cases in which the Union, autarchic entities or federal public companies have an interest as plaintiffs, defendants, assistants or opponents. The central theme of the study is the procedural prerogatives of the Public Treasury (Fazenda Pública). A body of procedural protections in its favour goes back a long way. To stay within the twentieth century, for example, art. 32 of Decree-Law no. 1,608 of 18 September 1939 (the Code of Civil Procedure) already provided: "Art. 32. Representatives of the Public Treasury shall be allowed fourfold the time limits for filing a defence and double for lodging an appeal." The current Code of Civil Procedure, as highlighted in the descriptive part of the text, refined and extended this pro-Treasury support; the best-known provision is surely art. 188 of the Code of Civil Procedure. However, the many advances within Brazilian society, basically on the political, constitutional, legal, social, economic, cultural, global and technological planes, brought with them, as a corollary, the imperative of optimizing the mechanisms directed at what this work calls qualified access to Justice. This set of factors is, in reality, underpinned by the principles of equality and isonomy that permeate the entire framework of achievements secured in the constitutional political-legal order. In the words of professor and current Justice of the Supremo Tribunal Federal Luiz Fux, neutrality, above all on the part of the judge, is an impediment to the magistrate's maintaining the equality of the parties in the procedural legal relationship. This must, of course, be done as far as possible, that is, observing the law which, when it occasionally establishes a certain degree of distinction on specific points, does so committed to the effective correction of the discrimen, so as to find and secure equality. Along this line of thought, it must do so in such a way as to prevent the result of applying the rule from being an expression of the deficiency and demerit of one of the parties in court. All things considered, the understanding emphasized here is addressed not only to the judge but, in this case, also to the legislator, the creative source of the norms placed in evidence.

Relevance:

100.00%

Publisher:

Abstract:

In the judgment of the special appeal concerning the lawsuit filed by the television presenter Xuxa Meneghel to compel Google Search to remove from its search indexes the results of queries for the expression "Xuxa pedófila" or any other expression associating the plaintiff's name with that criminal practice, the rapporteur of the decision, Justice Nancy Andrighi, clearly defined the controversy with which this work is concerned: the daily lives of thousands of people currently depend on information available on the web, which would be hard to find without the search tools offered by search sites. On the other hand, these same horizontal search engines can be used to locate pages containing harmful information, through URLs returned by searches on a person's name. What, then, is to be done? Is there really a right to be forgotten, that is, a right to have a URL returned by a search on a person's name delisted from the horizontal search engine's index? Some argue that the most appropriate measure for dealing with this problem would be to go after the third party who originally published the information on the web. Others contend that protecting a right to be forgotten would pose too great a threat to freedom of expression and information. Against this background, this dissertation seeks to establish the possible characteristics and limits of the right to be forgotten in the digital age under current Brazilian law, weighing that right against other public and private rights and interests (especially the right to freedom of expression and to information) and taking into account how the world wide web itself, and search tools in particular, actually operate. Given the importance of horizontal search engines to the exercise of access to information and, moreover, the difficulties involved in removing URLs from every site on which they have been published, our research focuses on the potential, and the difficulties, of using the regulation of such search tools to protect the right to be forgotten effectively in the digital age.

Relevance:

100.00%

Publisher:

Abstract:

VANTI, Nadia. Mapeamento das Instituições Federais de Ensino Superior da Região Nordeste do Brasil na Web. Informação & Informação, Londrina, v. 15, p. 55-67, 2010.