971 results for search engine optimization


Relevance: 80.00%

Abstract:

Introduction: In Endodontics, local anesthesia is the most widely used method of pain control; however, several studies have shown that conventional anesthetic techniques have reduced efficacy in symptomatic cases. There are several alternatives to conventional techniques and anesthetics, as well as supplementary anesthesia techniques that can be used to increase the depth of pulpal anesthesia, and these should be part of the clinical arsenal so that patients can be treated painlessly. Objective: The present work aimed to gather and analyze the literature on local anesthesia in Endodontics and the factors that can influence its administration. Techniques and anesthetics in current use were covered, as well as other recently studied methods, with emphasis on their efficacy in anesthetizing patients diagnosed with irreversible pulpitis. Materials and methods: A literature search was carried out in the PubMed search engine using the following keywords: "Anesthesia", "Local anesthesia", "Anesthesia Technique", "Anesthetic efficacy", "Endodontics", "Lidocaine", "Articaine", "Pulpitis". A time limit of 2005 to 2016 was established, and 54 articles were included, with emphasis on meta-analyses, literature reviews, and randomized controlled clinical trials. Conclusion: In symptomatic patients, anesthetics of greater potency and supplementary techniques are often necessary in order to control preoperative pain. Techniques such as intraligamentary injection, intraosseous injection and supplementary infiltrations are therefore recommended to ensure pulpal anesthesia after failed primary techniques. The sensitivity that some patients show to certain components of local anesthetics should also be taken into account, requiring special care in the selection and administration of these agents.

Relevance: 80.00%

Abstract:

Conventional web search engines are centralised: a single entity crawls and indexes the documents selected for future retrieval, and controls the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents, to query processing – across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval in which the user's information need, formulated as a query, is routed to distributed resources and the retrieved result lists are merged into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (the consumer). However, query routing is considered one of the major challenges and a critical component of P2P-IR networks: relevant peers might be lost through low-quality peer selection during query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques that improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into semantically similar clusters, each managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called the Inverted PeerCluster Index (IPI), which mimics the conventional inverted index of a centralised corpus in organising the statistics of peers' terms. The results show retrieval quality competitive with baseline approaches. Furthermore, I study the applicability of conventional Information Retrieval models as peer selection approaches, where each peer can be considered a large document made up of its documents. The experimental evaluation shows competitive and significant results, demonstrating that document retrieval methods are effective for peer selection, which reinforces the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results against state-of-the-art resource selection methods and results competitive with corresponding classification-based approaches.
Finally, I propose reputation-based query routing approaches that exploit the idea, drawn from social community networks, of providing feedback on a specific item and retaining it for future decision-making. The system monitors users' behaviour when they click or download documents from the final ranked list as implicit feedback, and mines this information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback given on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results in almost all measurement metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on these results, if one were to choose a single technique, reputation-based approaches are clearly the natural choice; they can also be deployed on any P2P network.
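A minimal sketch may make the reputation bookkeeping concrete: implicit feedback (clicks and downloads) on documents in the merged result list is accumulated per peer, and candidate peers are ranked by accumulated reputation when routing a new query. The class name, signal weights and decay factor below are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical reputation store for peers, assuming downloads signal
# relevance more strongly than clicks and older feedback decays.
from collections import defaultdict

class PeerReputation:
    def __init__(self, decay: float = 0.99):
        self.scores = defaultdict(float)
        self.decay = decay                  # older feedback matters less

    def record(self, peer_id: str, clicked: bool, downloaded: bool):
        gain = 1.0 * clicked + 2.0 * downloaded   # assumed weights
        for p in self.scores:
            self.scores[p] *= self.decay
        self.scores[peer_id] += gain

    def rank(self, candidate_peers):
        # Route the query to the highest-reputation peers first.
        return sorted(candidate_peers, key=self.scores.__getitem__, reverse=True)

rep = PeerReputation()
rep.record("peer-17", clicked=True, downloaded=True)
rep.record("peer-03", clicked=True, downloaded=False)
print(rep.rank(["peer-03", "peer-17", "peer-42"]))   # peer-17 ranked first
```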

Relevance: 80.00%

Abstract:

Named Entity Recognition has the objective of identifying and classifying entities, according to certain categories or labels, contained in texts written in natural language. The Named Entity Recognition system implemented in the course of this dissertation is intended to identify localities in informal texts, assigning to each identified locality one of the labels "aldeia", "vila" or "cidade" in a first approach to the problem. In a second approach, the labels "freguesia", "concelho" and "distrito" were taken into consideration. To obtain the classification of each entity, a statistical analysis was performed on the number of results returned by the Google Search engine for a query consisting of the entity preceded by a label.
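The classification rule the abstract describes reduces to an argmax over search hit counts. The sketch below shows that rule; `hit_count` is a hypothetical stub, since programmatic access to search engine result counts requires an external API, and the query format is an assumption.

```python
# Label each locality with the tag whose "label entity" query returns the
# most search results. hit_count must be wired to a real search API.
def hit_count(query: str) -> int:
    """Hypothetical: number of results a search engine reports for `query`."""
    raise NotImplementedError("connect to a search API of your choice")

LABELS = ["aldeia", "vila", "cidade"]        # first-approach label set

def classify_locality(entity: str) -> str:
    counts = {label: hit_count(f'"{label} {entity}"') for label in LABELS}
    return max(counts, key=counts.get)       # label with the most hits wins

# classify_locality("Sintra") would compare the counts for
# '"aldeia Sintra"', '"vila Sintra"' and '"cidade Sintra"'.
```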

Relevance: 80.00%

Abstract:

Abstract Aim: To identify nursing interventions aimed at persons with venous, arterial or mixed leg ulcers. Methodology: A search was carried out in the EBSCO search engine (CINAHL Plus with Full Text, MEDLINE with Full Text, MedicLatina, Academic Search Complete) for full-text articles published between 2008/01/01 and 2015/01/31, with the following keywords: [(MM "leg ulcer") OR (wound care) OR (wound healing)] AND [(nursing) OR (nursing assessment) OR (nursing intervention)], filtered through an initial question in PI[C]O format. Results: The different etiologies of leg ulcer require a specific therapeutic and prophylactic approach. The following factors that promote healing were identified: individualization of care, interpersonal relationship, pain control, control of exudate, education for health self-management, self-care, therapeutic adherence, implementation of good-practice guidelines, and auditing and feedback of practices. Conclusion: Person-centred care and evidence-based practices improve health outcomes in the prevention and treatment of leg ulcers.

Relevance: 80.00%

Abstract:

Most existing open-source search engines use keyword or tf-idf based techniques to find documents and web pages relevant to an input query. Although these methods, with the help of PageRank or knowledge graphs, have proved effective in some cases, they often fail to retrieve relevant instances for more complicated queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to understanding the query instead of simply finding word matches and, in general, it represents knowledge in a way suitable for retrieval. We chose to investigate a new self-supervised strategy for handling the training of unlabeled data, based on the creation of pairs of 'artificial' queries and their respective positive passages. We claim that by removing the reliance on labeled data, we may use the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
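As a generic illustration of transformer-based semantic search (not the thesis's actual system), the sketch below uses the open-source sentence-transformers library with an off-the-shelf bi-encoder; the model name and toy corpus are assumptions.

```python
# Embed corpus and query with a pretrained bi-encoder, then rank by
# cosine similarity: meaning, not exact keyword overlap, drives the match.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed off-the-shelf model

corpus = [
    "How to file the annual municipal budget report.",
    "Guidelines for public procurement procedures.",
    "Employee leave and absence regulations.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query = "rules for buying goods and services in the public sector"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]
for idx in scores.argsort(descending=True):
    print(f"{float(scores[idx]):.3f}  {corpus[int(idx)]}")
```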

Relevance: 80.00%

Abstract:

The retrieval of relevant documents is a fundamental task; it can take place in closed environments, such as digital libraries, or in open environments, such as the World Wide Web. What we analyse in this thesis project concerns the interfaces used to display the results of a search over a collection of documents. The objective, however, is not to analyse search engines themselves, but the various mechanisms that allow results to be visualized. We also look at the visualizations relevant to information seeking on the web; in particular, we discuss spatial visualizations, compared against the classic textual presentation. We also analyse classification within a document collection, as well as personalization, that is, configuring the visualization for the user's benefit. Once the relevant documents are found, we analyse text fragments such as snippets, descriptive summaries and abstracts, highlighting how they help the user improve access to certain types of results. Finally, we analyse visualizations of relevant fragments within a text; in particular, we present navigation techniques and the search for specific words within documents, that is, document overviews and the user's preferred method of finding a word.

Relevance: 50.00%

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage, relative to the distribution of the starting solution, to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
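A toy sketch of a second-order linear dynamic constraint model of the kind described may help: a commanded engine parameter is filtered into an "actually achieved" trajectory, so the optimizer cannot exploit physically unreachable transients. The natural frequency, damping and time step below are illustrative assumptions.

```python
# Second-order lag mapping commanded to achieved parameters:
# x'' = wn^2 (u - x) - 2 zeta wn x', integrated with explicit Euler.
import numpy as np

def achieved(commanded: np.ndarray, dt: float,
             wn: float = 3.0, zeta: float = 0.9) -> np.ndarray:
    x, v = commanded[0], 0.0            # start settled at the first command
    out = np.empty_like(commanded)
    for i, u in enumerate(commanded):
        a = wn**2 * (u - x) - 2.0 * zeta * wn * v
        v += a * dt
        x += v * dt
        out[i] = x
    return out

# A step in a commanded parameter is smoothed into an achievable trajectory,
# which would then feed the transient emission and torque models.
cmd = np.concatenate([np.full(50, 1.0), np.full(150, 2.5)])
ach = achieved(cmd, dt=0.01)
```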

Relevance: 40.00%

Abstract:

Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we exploit mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net present value type (one of them including environmental costs) and two of the global potential impact type. The hydrodealkylation of toluene to produce benzene was used as a case study, considering five topologies of different complexity, mainly obtained by including or excluding liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of results for each algorithm, capacity for producing high-purity product, screening of topologies, robustness of the objective functions in screening topologies, trade-offs between economic and environmental objective functions, and variability of the optimum solutions.
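For readers unfamiliar with one of the two algorithms, here is a minimal simulated-annealing sketch; the one-dimensional objective stands in for a flowsheet evaluation, and the cooling schedule and step size are illustrative assumptions.

```python
# Simulated annealing: accept improvements always, accept uphill moves
# with probability exp(-delta/t), which shrinks as the temperature cools.
import math
import random

def simulated_annealing(f, x0, step=0.1, t0=1.0, alpha=0.95, iters=2000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                       # geometric cooling schedule
    return best, fbest

best_x, best_f = simulated_annealing(lambda x: (x - 1.3) ** 2, x0=5.0)
```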

Relevance: 40.00%

Abstract:

Search optimization methods are needed to solve optimization problems where the objective function and/or constraint functions might be non-differentiable or non-convex, or where it might not be possible to determine their analytical expressions, whether due to their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, or can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct search optimization methods only use function values and do not need any derivatives or approximations of them. In this work we present a Java API that includes several methods and algorithms, none of which uses derivatives, for solving constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to the API using Web Services are both presented. Remote access to the API has the advantage of always providing access to its latest version. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: one is a standalone Java application and the other a Web-based application, both using the developed API.
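The abstract concerns a Java API; as a language-neutral illustration of what such derivative-free methods do, the Python sketch below implements a simple compass (coordinate) search that polls only function values, shrinking the step when no direction improves. It is a generic textbook method, not the API's actual algorithm.

```python
# Compass search: try +/- steps along each coordinate, accept the first
# improving point; halve the step when a full poll fails.
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                  # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

xmin, fmin = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                            [0.0, 0.0])
```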

Relevance: 40.00%

Abstract:

Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative used to solve constrained nonlinear optimization problems is the filter method. Filter methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective one, attempting to minimize both the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filter method. This work presents a new variant of these methods, which combines the filter method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.
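A small sketch of the filter data structure at the heart of such methods may clarify the bi-objective treatment: a trial point is accepted only if no stored pair (f, h) dominates it, where h aggregates constraint violations (h = 0 means feasible). The squared-violation aggregation below is one of several possible choices, shown only for illustration.

```python
# Filter for constrained direct search: keep mutually non-dominated
# (objective, violation) pairs and accept only non-dominated trial points.
def violation(constraints, x):
    """Aggregate h(x) as the sum of squared violations of g_i(x) <= 0."""
    return sum(max(0.0, g(x)) ** 2 for g in constraints)

class Filter:
    def __init__(self):
        self.entries = []                       # list of (f, h) pairs

    def acceptable(self, f, h):
        # (f2, h2) dominates (f, h) when f2 <= f and h2 <= h.
        return all(not (f2 <= f and h2 <= h) for f2, h2 in self.entries)

    def add(self, f, h):
        # Drop entries the newcomer dominates, then store it.
        self.entries = [(f2, h2) for f2, h2 in self.entries
                        if not (f <= f2 and h <= h2)]
        self.entries.append((f, h))

# Inside a simplex/direct-search loop, a trial point x would be kept only
# if filt.acceptable(obj(x), violation(cons, x)) is True.
```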

Relevance: 40.00%

Abstract:

Locating and identifying points as global minimizers is, in general, a hard and time-consuming task. The difficulties increase when it is impossible to use the derivatives of the functions defining the problem. In this work, we propose a new class of methods suited for global derivative-free constrained optimization. Using direct search of directional type, the algorithm alternates between a search step, where potentially good regions are located, and a poll step, where the previously located promising regions are explored. This exploitation is carried out by launching several instances of directional direct search, one in each region of interest. Differently from a simple multistart strategy, direct searches merge when sufficiently close. The goal is to end with as many direct searches as there are local minimizers, which would easily allow locating the global extreme value. We describe the algorithmic structure considered, present the corresponding convergence analysis and report numerical results, showing that the proposed method is competitive with global derivative-free optimization solvers in common use.
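The merging rule is what distinguishes this approach from plain multistart, so a minimal sketch of just that rule is shown below; the merge radius and test points are illustrative assumptions, not the paper's actual criterion.

```python
# Collapse concurrent direct-search instances whose current iterates lie
# within `radius` of a surviving search, so ideally one search remains
# per local minimizer.
import numpy as np

def merge_searches(iterates: list, radius: float) -> list:
    survivors = []
    for x in iterates:
        if all(np.linalg.norm(x - s) > radius for s in survivors):
            survivors.append(x)          # x continues as its own search
    return survivors

# Each cycle: poll every surviving search, then merge the close ones.
starts = [np.array(p) for p in [(0.1, 0.2), (0.15, 0.22), (3.0, -1.0)]]
print(len(merge_searches(starts, radius=0.5)))   # -> 2 surviving searches
```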

Relevance: 40.00%

Abstract:

In order to correctly assess biaxial fatigue material properties, one must experimentally test different load conditions and stress levels. With the rise of new in-plane biaxial fatigue testing machines that use smaller and more efficient electrical motors instead of the conventional hydraulic machines, it is necessary to reduce the specimen size and to ensure that the specimen geometry is appropriate for the installed load capacity. At present there are no standard specimen geometries, and the indications in the literature on how to design an efficient test specimen are insufficient. The main goal of this paper is to present a methodology for obtaining an optimal cruciform specimen geometry, with thickness reduction in the gauge area, appropriate for fatigue crack initiation, as a function of the base material sheet thickness used to build the specimen. The geometry is optimized for maximum stress using several parameters, ensuring that in the gauge area the stress distributions along the loading directions are uniform and maximal under two limit phase-shift loading conditions (delta = 0 degrees and delta = 180 degrees). Fatigue damage will therefore always initiate at the centre of the specimen, avoiding failure outside this region. Using the Renard series of preferred numbers for the base material sheet thickness as a reference, the remaining geometry parameters are optimized using a derivative-free methodology called the direct multi-search (DMS) method. The final optimal geometry, as a function of the base material sheet thickness, is proposed as a guideline for cruciform specimen design and as a possible contribution to a future standard on in-plane biaxial fatigue tests.
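For readers unfamiliar with the Renard series used to index the sheet thickness, the values are spaced geometrically; a quick illustration for the R10 series follows (the two-decimal rounding is for display only, and the standard preferred numbers are the conventionally rounded versions of these values).

```python
# R10 Renard series: eleven values per decade, spaced as 10**(k/10).
R10 = [round(10 ** (k / 10), 2) for k in range(11)]
print(R10)  # [1.0, 1.26, 1.58, 2.0, 2.51, 3.16, 3.98, 5.01, 6.31, 7.94, 10.0]
```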

Relevance: 40.00%

Abstract:

Optimization is a very important field concerned with obtaining the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem. Local search techniques are used more often, however, since they try to find a local minimum within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the sets of solutions of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we try to find a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics, such as penalty calculation and step length. We implement two main local search algorithms. We use "Procure", a constraint reasoning and global optimization framework, to implement our techniques, and we produce and present our results over a set of benchmarks.
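A minimal sketch of the local-search stage may help: steepest descent is run inside one interval box produced by branch-and-prune, with iterates projected back onto the box so the search never leaves the pruned region. The gradient, step length and box below are illustrative assumptions, not the thesis's actual setup.

```python
# Projected steepest descent confined to an interval box [lower, upper].
import numpy as np

def descent_in_box(grad, x0, lower, upper, lr=0.05, iters=500):
    x = np.clip(np.asarray(x0, float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - lr * grad(x), lower, upper)   # step, then project
    return x

# Minimize f(x, y) = (x - 2)^2 + (y + 1)^2 over the box [0,1] x [-2,0].
grad = lambda v: np.array([2 * (v[0] - 2), 2 * (v[1] + 1)])
x_star = descent_in_box(grad, [0.5, -0.5], lower=[0.0, -2.0], upper=[1.0, 0.0])
# -> approximately [1.0, -1.0], the box point closest to the free minimum.
```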

Relevance: 40.00%

Abstract:

This paper deals with the design of optimal multiple gravity assist trajectories with deep space manoeuvres. A pruning method which considers the sequential nature of the problem is presented. The method locates feasible vectors using local optimization and applies a clustering algorithm to find reduced bounding boxes which can be used in a subsequent optimization step. Since multiple local minima remain within the pruned search space, the use of a global optimization method, such as Differential Evolution, is suggested for finding solutions which are likely to be close to the global optimum. Two case studies are presented.
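A short sketch of the suggested final step follows: Differential Evolution is run inside each pruned bounding box and the best solution is kept. The Rastrigin objective and the two boxes below are illustrative stand-ins for a trajectory cost and the clusters' reduced bounds.

```python
# Run Differential Evolution per pruned box; keep the best result.
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # Rastrigin: many local minima, loosely mimicking a multimodal
    # trajectory cost landscape.
    x = np.asarray(x)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

pruned_boxes = [
    [(-1.5, -0.5), (-1.5, -0.5)],    # one reduced box per cluster
    [(-0.5, 0.5), (-0.5, 0.5)],
]

results = [differential_evolution(objective, bounds=box, seed=0)
           for box in pruned_boxes]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)              # the box containing the origin wins
```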

Relevance: 40.00%

Abstract:

A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically using a backpropagation-through-time technique. The expressions for the Kautz basis and for generalized orthonormal bases of functions (GOBF) are addressed; the ones for the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
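To make the structure concrete, here is a sketch of a discrete Laguerre filter bank, the simplest of the bases mentioned: a single real pole fully determines the basis, and the filter outputs feed the static Volterra polynomial. The pole value, input signal and kernel coefficients are illustrative assumptions, and the gradient computation itself is not shown.

```python
# Discrete Laguerre basis: L0(z) = sqrt(1-a^2)/(1 - a z^-1), followed by
# a cascade of all-pass sections (z^-1 - a)/(1 - a z^-1).
import numpy as np

def laguerre_outputs(u: np.ndarray, a: float, n_filters: int) -> np.ndarray:
    """Return phi[k, t], the outputs of the first n_filters Laguerre filters."""
    T = u.size
    phi = np.zeros((n_filters, T))
    c = np.sqrt(1.0 - a * a)
    for t in range(T):
        phi[0, t] = (a * phi[0, t - 1] if t else 0.0) + c * u[t]
        for k in range(1, n_filters):
            prev_delayed = phi[k - 1, t - 1] if t else 0.0
            own_delayed = phi[k, t - 1] if t else 0.0
            phi[k, t] = a * own_delayed + prev_delayed - a * phi[k - 1, t]
    return phi

# Second-order OBF Volterra output: linear terms plus one quadratic
# cross-term shown, with illustrative coefficients.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
phi = laguerre_outputs(u, a=0.6, n_filters=3)
c1 = np.array([0.5, -0.2, 0.1])
y = c1 @ phi + 0.05 * (phi[0] * phi[1])
```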