930 results for Web Search Behaviour


Relevance:

30.00%

Publisher:

Abstract:

Information and Communication Technologies (ICT) provide new strategies for disseminating information and new communication models that can change attitudes and human behaviour concerning education. Nowadays the Internet is crucial as a means of communication and information sharing. Education and tutoring increasingly require the use of ICT, supported by the Internet, to establish teacher-student and student-student communication, to disseminate subject content, and to support the teaching and learning process. This paper presents an intelligent tutor that aims to be a tool to support teaching and learning in the field of electrical engineering projects.

Relevance:

30.00%

Publisher:

Abstract:

The Web has become an indispensable tool for modern society. The ability to access enormous amounts of information, available practically anywhere in the world, is a great advantage for our lives. However, the overwhelming amount of available information creates a problem: finding the information we need among a great deal of irrelevant information. To help with this task, powerful online search engines were created, which scour the Web looking for the best results, according to their own criteria, for the data we need. Currently, the search engines in vogue use a simple presentation format, consisting only of a text box where the user enters keywords about the topic to be searched; the results are laid out as a list of hyperlinks ordered by the relevance the engine assigns to each one. However, there are other ways of presenting results. One alternative is to present them on three-dimensional interfaces, and it is on this type of system that this work focuses: search engines with 3D interfaces. The problem is that Web pages are not prepared to be consumed by this type of search engine. To solve this problem, a general-purpose model for Web pages was built, which can feed the requirements of the several variants of these search engines. An automatic instantiation prototype was also developed, which collects the necessary information from Web pages and fills in the model.
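The dissertation's model and prototype are not reproduced in the abstract; as a rough, hedged illustration of the automatic instantiation idea, the Java sketch below (assuming the jsoup HTML parsing library; the PageModel class and all names are hypothetical, not the dissertation's model) fetches a page and fills a minimal model with its title, description and outgoing links.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of automatic model instantiation from a Web page.
public class PageModelExtractor {

    static class PageModel {
        String url;
        String title;
        String description;
        List<String> links = new ArrayList<>();
    }

    static PageModel extract(String url) throws Exception {
        Document doc = Jsoup.connect(url).get();        // fetch and parse the page
        PageModel model = new PageModel();
        model.url = url;
        model.title = doc.title();
        model.description = doc.select("meta[name=description]").attr("content");
        for (Element link : doc.select("a[href]")) {
            model.links.add(link.attr("abs:href"));     // outgoing links in absolute form
        }
        return model;
    }

    public static void main(String[] args) throws Exception {
        PageModel m = extract("https://example.org/");
        System.out.println(m.title + " - " + m.links.size() + " links");
    }
}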

Relevance:

30.00%

Publisher:

Abstract:

When exploring a virtual environment, realism depends mainly on two factors: realistic images and real-time feedback (motion, behaviour, etc.). In this context, the photo-realism and physical validity of computer-generated images required by emerging applications, such as advanced e-commerce, still pose major challenges for rendering research, while the complexity of lighting phenomena further requires powerful and predictable computing if time constraints are to be met. In this technical report we survey the state of the art in rendering, focusing on approaches, techniques and technologies that might enable real-time, interactive, web-based client-server rendering systems. The focus is on the end systems and not on the networking technologies used to interconnect client(s) and server(s).

Relevance:

30.00%

Publisher:

Abstract:

Search optimization methods are needed to solve optimization problems where the objective function and/or constraint functions may be non-differentiable or non-convex, or where their analytical expressions cannot be determined, whether because of their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics: function values may result from experimental or simulation processes, may be modelled by functions with complex expressions or by noisy functions, and their derivatives may be impossible or very difficult to calculate. Direct search optimization methods use only function values and need no derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms for solving constrained and unconstrained optimization problems. Traditional access to the API, by installing it on the developer's and/or user's computer, and remote access to it through Web Services are both presented; remote access has the advantage of always providing the latest version of the API. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both built on the developed API.
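The API itself is not given in the abstract; as a hedged illustration of what a direct search method does, the Java sketch below implements a simple coordinate search that relies only on function evaluations, with no derivatives. Class and method names are illustrative and are not those of the API described above.

import java.util.function.Function;

// Minimal derivative-free direct search (coordinate search) sketch.
public class CoordinateSearch {

    // Minimizes f starting from x0, using only function evaluations.
    public static double[] minimize(Function<double[], Double> f, double[] x0,
                                    double step, double tolerance, int maxIterations) {
        double[] x = x0.clone();
        double fx = f.apply(x);
        for (int it = 0; it < maxIterations && step > tolerance; it++) {
            boolean improved = false;
            for (int i = 0; i < x.length; i++) {
                for (double sign : new double[]{+1.0, -1.0}) {
                    double[] trial = x.clone();
                    trial[i] += sign * step;
                    double ft = f.apply(trial);
                    if (ft < fx) {           // accept the move: only function values are compared
                        x = trial;
                        fx = ft;
                        improved = true;
                    }
                }
            }
            if (!improved) {
                step /= 2.0;                 // shrink the step when no axis move helps
            }
        }
        return x;
    }

    public static void main(String[] args) {
        // Example: minimize a simple quadratic without using derivatives.
        double[] result = minimize(v -> (v[0] - 3) * (v[0] - 3) + (v[1] + 1) * (v[1] + 1),
                                   new double[]{0.0, 0.0}, 1.0, 1e-6, 1000);
        System.out.printf("x = (%.4f, %.4f)%n", result[0], result[1]);
    }
}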

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to learn about a specific aspect of the informational profile of students who grew up in the digital era, the so-called digital natives, in their use of the Internet. It seeks to determine the criteria they apply when assessing the credibility of the information sources available on the web. The analysis of the data, obtained from 195 questionnaires administered to students from the 8th to the 12th grade, is framed and supported by a literature review on the concept of information credibility.

Relevance:

30.00%

Publisher:

Abstract:

Partial results are presented from a study intended to promote a better understanding of the strategies that young people of school age (12-18 years) consider relevant for evaluating the information sources available on the Internet. For this purpose, a survey was administered to a sample of 195 students from a lower secondary (3rd cycle) school and an upper secondary school in a municipality of the Porto district. The results concerning these students' perceptions of the criteria to apply when evaluating the credibility of information sources available on the Internet are presented and discussed. The practices that young people report regarding the use of criteria of authorship, originality, structure, currency and comparison to assess the credibility of information sources are presented. In addition, these results are compared with, and discussed against, the perceptions the same respondents show regarding the elements that make up each of these criteria. The analysis of the data is framed and supported by a literature review on the concept of credibility applied to information sources available on the Internet. Some topics related to the inclusion of strategies for evaluating the credibility of digital information in the Big6 model, one of the best-known information literacy skills development models used in Portuguese school libraries, are also addressed.

Relevance:

30.00%

Publisher:

Abstract:

This work presents a hybrid maneuver for gradient search with multiple AUVs. The mission consists of following a gradient field in order to locate the source of a hydrothermal vent or an underwater freshwater source. The formation gradient search exploits the structuring of the environment by the phenomenon under study. The ingredients for coordination are the payload data collected by each vehicle, each vehicle's knowledge of the behaviour of the other vehicles, and detected formation distortions.
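The paper's controller is not reproduced here; as a minimal, hedged sketch of the coordination idea, assume three vehicles in a non-collinear formation each report a scalar payload reading: the local gradient can be estimated from the differences between their readings and used to steer the formation toward the source. All names below are illustrative.

// Hypothetical illustration: estimate the local gradient of a scalar field
// (e.g. temperature near a vent) from three vehicles' positions and readings.
public class FormationGradientEstimate {

    // Positions p1, p2, p3 are {x, y}; c1, c2, c3 are the payload readings.
    // Solves (p2 - p1) . g = c2 - c1 and (p3 - p1) . g = c3 - c1 for g = {gx, gy}.
    static double[] estimateGradient(double[] p1, double c1,
                                     double[] p2, double c2,
                                     double[] p3, double c3) {
        double a11 = p2[0] - p1[0], a12 = p2[1] - p1[1], b1 = c2 - c1;
        double a21 = p3[0] - p1[0], a22 = p3[1] - p1[1], b2 = c3 - c1;
        double det = a11 * a22 - a12 * a21;   // non-zero if the vehicles are not collinear
        return new double[]{(b1 * a22 - a12 * b2) / det,
                            (a11 * b2 - a21 * b1) / det};
    }

    public static void main(String[] args) {
        double[] g = estimateGradient(new double[]{0, 0}, 20.0,
                                      new double[]{10, 0}, 21.5,
                                      new double[]{0, 10}, 20.5);
        System.out.printf("Estimated gradient: (%.2f, %.2f) per metre%n", g[0], g[1]);
    }
}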

Relevance:

30.00%

Publisher:

Abstract:

Master’s Degree Dissertation

Relevance:

30.00%

Publisher:

Abstract:

As we move closer to the practical concept of the Internet of Things and our reliance on public and private APIs increases, web services and their related topics have become utterly crucial to the informatics community. However, the question of which style of web services best solves a particular problem can raise significant and multifarious debates. Two implementation styles stand out: the RPC-oriented style, represented by implementations of the SOAP protocol, and the hypermedia style, represented by implementations of the REST architectural style. Searching for examples of established web services, we can find a handful of robust and reliable public and private SOAP APIs; nevertheless, RESTful services seem to be gaining popularity in the enterprise community. For the current generation of developers working on informatics solutions, REST seems to represent a fundamental and straightforward alternative, and even a more deep-rooted approach, than SOAP. But are they comparable? Does each approach have scenarios for which it is best suited? Such a study is briefly carried out in the chapters of this document, starting with the respective background study, followed by an analysis of the hypermedia approach and an instantiation of its architecture in a particular case study applied in a BPM context.
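As a hedged illustration of the hypermedia style discussed above (not the case study's code), the sketch below uses JAX-RS annotations to expose a BPM task as an addressable resource whose representation carries links to the next allowed actions, in contrast to a SOAP RPC operation described in a WSDL. The resource path and JSON shape are hypothetical.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical JAX-RS resource: a task retrieved with a plain HTTP GET.
@Path("/tasks")
public class TaskResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getTask(@PathParam("id") String id) {
        // A real service would look up the task and include hypermedia links
        // to the next allowed actions (complete, delegate, ...).
        return "{\"id\": \"" + id + "\", \"links\": [{\"rel\": \"complete\", "
             + "\"href\": \"/tasks/" + id + "/complete\"}]}";
    }
}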

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

In the last few years we have observed an exponential increase in information systems, and parking information is one more example. Reliable and up-to-date information about parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied; San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to see the current parking status as well as future status according to the prediction models. The source of the data is ancillary in this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work the author describes the techniques applied, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour and, with this knowledge, can predict future status values for a given date. The data comes from Smart Park Ontinyent and consists of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier over a set of decision trees, created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work reports error measurements that indicate how reliable the predictions are. The second test used function fitting with the seasonal exponential smoothing TBATS model. The last test combined the two previous models to examine the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies respectively; for a car park of 47 places this is roughly a 10% average error in parking slot predictions. The results could be even better with longer data series. In order to make this kind of information visible and reachable from any device with an Internet connection, a web application was developed. Besides displaying the data, this application also offers other functions to improve the task of searching for parking. Apart from parking prediction, the new functions were:
- Distances from the user's location: the distances from the user's current location to the different car parks in the city.
- Geocoding: matching a textual description or an address to a concrete location.
- Geolocation: positioning the user.
- Parking list panel: not a service or a function as such, but a better visualisation and handling of the information.
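The thesis implementation is not included in the abstract; as a hedged sketch of the evaluation step it describes, the Java fragment below averages the forecasts of two models and computes the mean absolute error in predicted vacancies, the kind of measure behind the 6.2, 6.6 and 5.4 figures. All names and sample numbers are illustrative.

// Illustrative sketch (not the thesis code): combine two forecasts of free
// parking slots and measure the mean absolute error against observations.
public class ParkingForecastEval {

    static double[] combine(double[] treeForecast, double[] tbatsForecast) {
        double[] combined = new double[treeForecast.length];
        for (int i = 0; i < combined.length; i++) {
            combined[i] = 0.5 * (treeForecast[i] + tbatsForecast[i]); // simple average of the two models
        }
        return combined;
    }

    static double meanAbsoluteError(double[] predicted, int[] observed) {
        double sum = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            sum += Math.abs(predicted[i] - observed[i]);
        }
        return sum / predicted.length;
    }

    public static void main(String[] args) {
        double[] treeForecast  = {12, 15, 20, 30, 28};
        double[] tbatsForecast = {10, 14, 22, 27, 30};
        int[] observed         = {11, 16, 21, 29, 27};
        double mae = meanAbsoluteError(combine(treeForecast, tbatsForecast), observed);
        System.out.printf("Mean absolute error: %.1f vacancies%n", mae);
    }
}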

Relevance:

30.00%

Publisher:

Abstract:

This report covers the procedure for creating a news web application. It is divided into three areas: one where users with administration permissions can post news items to be viewed by everyone; another, accessible to registered users, that allows viewing news from other servers through the RSS data format; and a third section for administrative management, used to add new news items, modify existing ones or add new web pages containing news. Registered users can select the newspapers from which they will receive information, as well as specify which topics they prefer in the news search.

Relevance:

30.00%

Publisher:

Abstract:

PADICAT is the web archive created in 2005 in Catalonia (Spain) by the Library of Catalonia (BC), the national library of Catalonia, with the aim of collecting, processing and providing permanent access to the digital heritage of Catalonia. Its harvesting strategy is based on a hybrid model (massive harvesting of the top-level domain; selective compilation of the web site output of Catalan organizations; focused harvesting of public events). The system provides open access to the whole collection on the Internet. We consider it necessary to complement the current search and visualization software with a new open-source tool, CAT (Curator Archiving Tool), composed of three modules aimed at effectively managing the process of human cataloguing; at publishing directories of digital resources and special collections; and at offering statistical information of added value to end users. Within the framework of the International Internet Preservation Consortium meeting (Vienna, 2010), the progress made in developing this new tool, and the philosophy that has motivated its design, are presented to the international community.

Relevance:

30.00%

Publisher:

Abstract:

The work carried out is divided into two clearly differentiated blocks, both related to microarray analysis. The first block consists of grouping the sample conditions of all genes into groups or clusters. These groupings are obtained by applying the following clustering algorithms directly to the microarray: SOM, PAM, SOTA and HC, and by applying the following algorithms to the microarray scaled with PC and MDS: SOM, PAM, SOTA, HC and K-MEANS. The second block consists of performing a gene search based on the confidence intervals of each cluster of the active grouping. The search conditions set by the user are validated for each cluster against the baseline value 0 and against the other clusters; these validations use the confidence intervals. The two blocks are integrated into an existing web application, the PCOPGene applet, hosted on the server http://revolutionresearch.uab.es.
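As a hedged sketch of the confidence-interval check described above (not the PCOPGene applet's code), the Java fragment below computes a 95% confidence interval for a cluster's mean expression under a normal approximation and tests whether it excludes the baseline value 0. The sample values and all names are illustrative.

// Illustrative sketch: 95% confidence interval for a cluster's mean expression,
// used to check whether the cluster differs from the baseline value 0.
public class ClusterConfidenceInterval {

    static double[] confidenceInterval95(double[] values) {
        double mean = 0.0;
        for (double v : values) mean += v;
        mean /= values.length;

        double variance = 0.0;
        for (double v : values) variance += (v - mean) * (v - mean);
        variance /= (values.length - 1);

        double halfWidth = 1.96 * Math.sqrt(variance / values.length); // normal approximation
        return new double[]{mean - halfWidth, mean + halfWidth};
    }

    public static void main(String[] args) {
        double[] clusterExpression = {0.8, 1.1, 0.9, 1.3, 0.7};
        double[] ci = confidenceInterval95(clusterExpression);
        boolean differsFromBaseline = ci[0] > 0 || ci[1] < 0; // interval excludes 0
        System.out.printf("95%% CI: [%.2f, %.2f], differs from 0: %b%n",
                          ci[0], ci[1], differsFromBaseline);
    }
}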