930 results for Web Search Behaviour
Abstract:
[ES] The project consists of establishing communication between two web portals through RESTful web services implemented in PHP. Both portals are related to the world of cinema. The second is, broadly speaking, a simplified interface to the first. The first web portal is built on Drupal 7; on it we install a series of modules that allow us to manage the content shown to users. A user who does not log in can browse all pages of the portal, log in, and register. The privileges granted to a user upon logging in are to take part in the film rating system and to interact with other logged-in users through a comment system. The administrator user can, in addition, manage the content and the logged-in users. The second portal is geared towards using the site on mobile devices. A user who has not logged in can browse all of its areas in the same way as a logged-in user. On this portal, unlike the first, it is not possible for this type of actor to register. The difference between the anonymous user and the logged-in user, in this case, is that the latter sees a discount on each film when viewing the catalogue. The web services, through GET and POST requests, provide users with a rich browsing experience. Thanks to them, on the second portal, users can log in, retrieve the film catalogue (as well as sort it and apply search filters by genre), and view the detail pages for films and directors. All of this without the need to create another database, simply by exchanging data with the server.
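As a rough illustration of how such GET and POST services might be consumed from a client, the following Python sketch logs in and retrieves a genre-filtered, sorted film catalogue. The endpoint paths, parameter names, and response fields are assumptions for illustration, not the actual API of the portals described above.

```python
# Hypothetical client for a RESTful film-catalogue service (illustrative only).
from typing import Optional
import requests

BASE_URL = "https://example.org/api"  # assumed service root, not the real one

def login(username: str, password: str) -> str:
    """Authenticate via POST and return a session token (assumed response shape)."""
    resp = requests.post(f"{BASE_URL}/login", data={"user": username, "password": password})
    resp.raise_for_status()
    return resp.json()["token"]

def get_catalogue(genre: Optional[str] = None, sort_by: str = "title") -> list:
    """Fetch the film catalogue via GET, optionally filtered by genre and sorted."""
    params = {"sort": sort_by}
    if genre:
        params["genre"] = genre
    resp = requests.get(f"{BASE_URL}/films", params=params)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = login("demo", "secret")          # hypothetical credentials
    for film in get_catalogue(genre="drama"):
        print(film.get("title"), film.get("director"))
```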
Abstract:
[ES] This Bachelor's Thesis is a service based on web technologies (PHP, HTML5, CSS, jQuery and AJAX). The main objective is to offer a service for creating and managing meeting minutes for the City Council of Las Palmas de Gran Canaria. To that end, it consists of two main modules, one to “create minutes” and another to “edit minutes”. The application consists of two parts. The first part, developed by me, consisted firstly of all the meetings with the staff of the City Council of Las Palmas de Gran Canaria that were needed to understand their needs and how to address them as a developer. Secondly, I was responsible for building and structuring the website: generating the different files with HTML content, interconnecting those files, and passing parameters between them using the various tools (jQuery, AJAX), as well as providing the site with all the necessary JavaScript. This part also includes a search module and a module to display the minutes that have already been completed. The search module contains a form with a search field and looks for matches within all the files generated with the application; it also shows a link to open each matching file from the browser. As an additional contribution, I also took care of configuring and generating the database tables required for the application to work.
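The search module described above can be pictured with a minimal sketch along these lines; the directory layout, file extension, and HTML link rendering are assumptions for illustration, not the thesis implementation.

```python
# Minimal keyword search over the files generated by the application
# (assumed to live in a "minutes" directory as plain-text files).
from pathlib import Path

def search_minutes(keyword: str, directory: str = "minutes") -> list:
    """Return the paths of generated files whose text contains the keyword."""
    hits = []
    for path in Path(directory).glob("*.txt"):
        if keyword.lower() in path.read_text(encoding="utf-8", errors="ignore").lower():
            hits.append(str(path))
    return hits

if __name__ == "__main__":
    for match in search_minutes("budget"):
        # In the web application, each hit would be rendered as a link that
        # opens the file in the browser.
        print(f'<a href="{match}">{match}</a>')
```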
Abstract:
[ES] This Bachelor's Thesis is a service based on web technologies. The main objective is to offer a service for creating and managing meeting minutes for the City Council of Las Palmas de Gran Canaria. To that end, it consists of two main modules, one to “create minutes” and another to “edit minutes”. A further module, called templates, has also been developed, in which a PDF is generated from a predefined template. The application was divided into several parts. The first part consisted of generating all the database configuration needed for the application to work. We then generated all the HTML files and the interconnections between them. Finally, we gave those static HTML pages a much clearer and more organized style, making the application look considerably more polished. Once the front end of the application was finished, we began to implement the logic behind it. The “create” and “edit” modules were built using HTML forms and combining the information obtained from those forms with HTML templates generated by us. All the information obtained from the forms is saved in .txt files so it can be used by the edit module. The templates module shows an HTML editor pre-filled with a template previously selected by the user. The PDF files produced by this module cannot be edited later, so no .txt files are generated for them. Finally, there are two modules that let us view all the minutes generated by the application. The first is the search module, which lets us search for a keyword within all the PDF files. The other module shows all the minutes that have been marked as “closed”. The application has been designed in a modular way, so modules can be added or removed easily.
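A minimal sketch of the create-then-store flow described above, assuming hypothetical field names, file names, and template placeholder syntax (the actual application uses PHP and its own templates):

```python
# Combine form data with a predefined HTML template and keep a plain-text
# record so the "edit" module can reload it later (all names are illustrative).
import json
from pathlib import Path
from string import Template

MINUTES_TEMPLATE = Template(
    "<html><body><h1>Minutes: $title</h1>"
    "<p>Date: $date</p><div>$body</div></body></html>"
)

def create_minutes(form_data: dict, out_dir: str = "minutes") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    name = form_data["title"].replace(" ", "_")
    # Rendered HTML, ready for viewing or later PDF export.
    (out / f"{name}.html").write_text(MINUTES_TEMPLATE.substitute(form_data), encoding="utf-8")
    # Plain-text record of the raw form values, used by the edit module.
    (out / f"{name}.txt").write_text(json.dumps(form_data, ensure_ascii=False), encoding="utf-8")

if __name__ == "__main__":
    create_minutes({"title": "Plenary session", "date": "2016-05-12", "body": "Agenda item 1..."})
```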
Abstract:
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used to organize knowledge in the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics in the Web (the knowledge soup problem) and the difficulty of drawing a relevant boundary around data so as to capture the knowledge that is meaningful with respect to a given context (the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation of the original source to RDF, followed by a refactoring step that adds semantics to the RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes along the path. We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, namely Component-based and REST. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs for entity summarization.
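To make the type-path intuition concrete, here is a toy Python sketch that counts length-one type paths (subject type, property, object type) over a handful of invented triples; the data and the length-one restriction are simplifications, not the thesis's actual extraction method.

```python
# Count which pairs of types a property connects; frequent type paths hint at
# recurring knowledge patterns (toy illustration only).
from collections import Counter

# Toy data: (subject, predicate, object) triples and node -> type assignments.
triples = [
    ("ex:Rome", "ex:capitalOf", "ex:Italy"),
    ("ex:Paris", "ex:capitalOf", "ex:France"),
    ("ex:Dante", "ex:bornIn", "ex:Florence"),
]
types = {
    "ex:Rome": "ex:City", "ex:Paris": "ex:City", "ex:Florence": "ex:City",
    "ex:Italy": "ex:Country", "ex:France": "ex:Country", "ex:Dante": "ex:Person",
}

# A (very short) type path: subject type -> property -> object type.
path_counts = Counter(
    (types[s], p, types[o]) for s, p, o in triples if s in types and o in types
)

for (s_type, prop, o_type), count in path_counts.most_common():
    print(f"{s_type} --{prop}--> {o_type}: {count}")
```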
Abstract:
A day without the Internet is hard for many to imagine. The range of Internet users has broadened, and with it the demands placed on websites have risen massively. The decision to stay on a website or to search on another one is made within a few seconds. This decision depends both on the website design and on the content presented. Evaluating how quickly users can find online information and how easily they can understand it is the task of web usability testing. Both technical and linguistic aspects are responsible for the finding and understanding of information. In usability research, however, the focus has so far largely been on evaluating the technical and aesthetic aspects of websites, while the linguistic aspects have been pushed into the background. In comparison, they are less systematically researched and are scarcely found in usability guidelines; instead, one mostly encounters general recommendations. Motivated by this, the present work aims to study web usability systematically from both a linguistic and a formal perspective. At the linguistic level, web usability was analyzed following Morris's theory of signs and the concept of linguistic web usability was introduced. Based on this analysis and a literature review of several usability guidelines, a catalogue of criteria was developed. To apply this catalogue in a usability study, the website of Johannes Gutenberg University Mainz (JGU) was tested in a usability laboratory using eye tracking combined with the think-aloud and retrospective think-aloud methods. The empirical results show that linguistic usability problems, just like formal ones, prevent users from finding the information they are looking for, or at least slow down their search. Accordingly, linguistic perspectives should be incorporated into usability guidelines.
Abstract:
This thesis focuses on the English localization of several sections of the new website of the Pinacoteca di Brera. The localization project was contextualized within the literature on museum communication on the one hand, and on web communication on the other, in order to propose improvements in the light of research in the field of SEO (Search Engine Optimization). The study of museum communication was enriched by a period of research at the University of Leicester (UK). The thesis aims to lay the groundwork for producing museum content suited to reading on the web, so as to offer not only a translation that is sound from a linguistic and cultural point of view, but also one that is easy for an online user to consume and can be found through search engines. The work is intended to give Italian museums some food for thought about possible improvements to their online platforms through localization and an in-depth analysis of web content according to principles of usability and visibility. Chapter 1 introduces the literature on museum studies, paying particular attention to communication. Chapter 2 provides a general overview of the web: good web-writing practices are suggested, SEO strategies to improve site visibility are analyzed, and the main characteristics of the localization process are outlined. Chapter 3 brings together the two fields explored separately so far, museums and the web, focusing on museums' online communication and concluding with an evaluation framework for museum websites. Chapter 4 applies the strategies discussed above to the specific case of the Pinacoteca di Brera, focusing on the evaluation of the site, the localization of selected sections, and proposed SEO strategies. Finally, Chapter 5 draws the whole work together, highlighting the main results obtained.
Abstract:
Web-scale knowledge retrieval can be enabled by distributed information retrieval, clustering Web clients into a large-scale computing infrastructure for knowledge discovery from Web documents. Based on this infrastructure, we propose to apply semiotic (i.e., sub-syntactical) and inductive (i.e., probabilistic) methods for inferring concept associations in human knowledge. These associations can be combined to form a fuzzy (i.e., gradual) semantic net representing a map of the knowledge in the Web. Thus, we propose to provide interactive visualizations of these cognitive concept maps to end users, who can browse and search the Web in a human-oriented, visual, and associative interface.
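As a toy illustration of graded concept associations inferred from co-occurrence, in the spirit of the probabilistic approach sketched above, the following Python snippet computes association weights between terms; the documents and the association measure are invented for illustration.

```python
# Infer graded (fuzzy) associations between concepts from document co-occurrence.
from collections import Counter
from itertools import combinations

documents = [
    {"web", "search", "usability"},
    {"web", "search", "ranking"},
    {"usability", "eye-tracking"},
    {"web", "ranking"},
]

term_counts = Counter(t for doc in documents for t in doc)
pair_counts = Counter(frozenset(p) for doc in documents for p in combinations(sorted(doc), 2))

def association(a: str, b: str) -> float:
    """Graded association in [0, 1]: co-occurrence count divided by the rarer term's count."""
    joint = pair_counts[frozenset((a, b))]
    return joint / min(term_counts[a], term_counts[b]) if joint else 0.0

# Edges of a fuzzy semantic net: concept pairs with their association weight.
for pair, _ in pair_counts.most_common(5):
    a, b = sorted(pair)
    print(f"{a} -- {b}: {association(a, b):.2f}")
```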
Abstract:
BACKGROUND: Many users search the Internet for answers to health questions. Complementary and alternative medicine (CAM) is a particularly common search topic. Because many CAM therapies do not require a clinician's prescription, false or misleading CAM information may be more dangerous than information about traditional therapies. Many quality criteria have been suggested to filter out potentially harmful online health information. However, assessing the accuracy of CAM information is uniquely challenging since CAM is generally not supported by conventional literature. OBJECTIVE: The purpose of this study is to determine whether domain-independent technical quality criteria can identify potentially harmful online CAM content. METHODS: We analyzed 150 Web sites retrieved from a search for the three most popular herbs: ginseng, ginkgo and St. John's wort and their purported uses on the ten most commonly used search engines. The presence of technical quality criteria as well as potentially harmful statements (commissions) and vital information that should have been mentioned (omissions) was recorded. RESULTS: Thirty-eight sites (25%) contained statements that could lead to direct physical harm if acted upon. One hundred forty-five sites (97%) had omitted information. We found no relationship between technical quality criteria and potentially harmful information. CONCLUSIONS: Current technical quality criteria do not identify potentially harmful CAM information online. Consumers should be warned to use other means of validation or to trust only known sites. Quality criteria that consider the uniqueness of CAM must be developed and validated.
Abstract:
OBJECTIVES: To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. DESIGN: The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity, that is, the number of links to a site from other sites. The top 200 sites returned in response to the query "breast cancer" were divided into "more popular" and "less popular" subgroups by three different measures of link popularity: Google rank and number of links reported independently by Google and by AltaVista (another search engine). MAIN OUTCOME MEASURES: Type and quality of content. RESULTS: More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%, P<0.01). These characteristics were also associated with a higher number of links as reported by Google and AltaVista. More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. CONCLUSIONS: Popularity of websites is associated with type rather than quality of content. Sites that include content correlated with popularity may best meet the public's desire for information about breast cancer.
Abstract:
Complementary and alternative medicine (CAM) use is growing rapidly. As CAM is relatively unregulated, it is important to evaluate the type and availability of CAM information. The goal of this study is to determine the prevalence, content and readability of online CAM information based on searches for arthritis, diabetes and fibromyalgia using four common search engines. Fifty-eight of 599 web pages retrieved by a "condition search" (9.6%) were CAM-oriented. Of 216 CAM pages found by the "condition" and "condition + herbs" searches, 78% were authored by commercial organizations, whose purpose involved commerce 69% of the time and 52.3% had no references. Although 98% of the CAM information was intended for consumers, the mean readability was at grade level 11. We conclude that consumers searching the web for health information are likely to encounter consumer-oriented CAM advertising, which is difficult to read and is not supported by the conventional literature.
Abstract:
Digital TV offers of 200 channels and 500 video-on-demand films, podcasting, mobile television, a new web blog being created every two seconds - these are some of the factual elements depicting contemporary audiovisual media in the digital environment. The present article looks into some of these technological advances and sketches their implications for the markets of media content, in particular as newly emerging patterns of consumer and business behaviour are concerned. Ultimately, it puts forward the question of whether the existing audiovisual media regulatory models, which are still predominantly analogue-based, have been rendered obsolete by the transformed (and continually transforming) digital environment.
Abstract:
For the main part, electronic government (or e-government for short) aims to put digital public services at the disposal of citizens, companies, and organizations. To that end, e-government comprises in particular the application of Information and Communications Technology (ICT) to support government operations and to provide better governmental services than are possible with traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, in this manner transforming the entire spectrum of relationships of public bodies with their citizens, businesses and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform citizens, businesses, and/or other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by handling the participants' data appropriately, a personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of tailored knowledge structure for each participant, a better-quality governmental service may be provided (i.e., individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the relationship between government and participants. The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all of these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR thereby stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, are briefly introduced and discussed.
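Purely as an illustration of the idea of fuzzy knowledge structures (not code from the Web KnowARR framework), the following sketch combines a crisp attribute with graded interest degrees in a hypothetical participant profile:

```python
# A crisp fact plus vague, graded attributes: each uncertain interest carries a
# membership degree in [0, 1]. All names and values are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class FuzzyProfile:
    social_security_number: str                                 # known, crisp information
    interests: dict = field(default_factory=dict)               # graded, vague information

    def update_interest(self, topic: str, degree: float) -> None:
        """Keep the strongest evidence seen so far for a topic (degree clamped to [0, 1])."""
        clamped = min(max(degree, 0.0), 1.0)
        self.interests[topic] = max(self.interests.get(topic, 0.0), clamped)

profile = FuzzyProfile(social_security_number="000-00-0000")    # dummy value
profile.update_interest("building permits", 0.8)   # inferred from browsing, fairly sure
profile.update_interest("tax declarations", 0.3)   # weak evidence only
print(profile.interests)
```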
Abstract:
Outside lobbying is a key strategy for social movements, interest groups and political parties for mobilising public opinion through the media in order to pressure policymakers and influence the policymaking process. Relying on semi-structured interviews and newspaper content analysis in six Western European countries, this article examines the use of four outside lobbying strategies – media-related activities, informing (about) the public, mobilisation and protest – and the amount of media coverage they attract. While some strategies are systematically less pursued than others, we find variation in their relative share across institutional contexts and actor types. Given that most of these differences are not accurately mirrored in the media, we conclude that media coverage is only loosely connected to outside lobbying behaviour, and that the media respond differently to a given strategy when used by different actors. Thus, the ability of different outside lobbying strategies to generate media coverage critically depends on who makes use of them.
Abstract:
Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a process of searching for the correct method name in documentation, which can be lengthy and distracting to the developer. We can decrease the method search time by enhancing the documentation of a class with the most frequently used methods. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, i.e., written in the same language and sharing dependencies. We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus", and in Java using a web browser displaying HTML-based documentation (Javadoc). We developed plugins for both browsers and gathered method usage data from open source projects, in order to increase developer productivity by reducing method search time. A small initial evaluation has been conducted, showing promising results in improving developer productivity.
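A rough sketch of how usage-frequency data might be gathered across an ecosystem of projects; the regex-based call detection and directory layout are simplifications and assumptions, not the plugins' actual analysis:

```python
# Count how often each method name is invoked across source files of projects
# sharing a language, as a proxy for method popularity (illustrative only).
import re
from collections import Counter
from pathlib import Path

CALL_PATTERN = re.compile(r"\.([A-Za-z_]\w*)\s*\(")  # naive "receiver.method(" matcher

def method_usage(projects_root: str, suffix: str = ".java") -> Counter:
    counts = Counter()
    for source in Path(projects_root).rglob(f"*{suffix}"):
        counts.update(CALL_PATTERN.findall(source.read_text(encoding="utf-8", errors="ignore")))
    return counts

if __name__ == "__main__":
    # Documentation tooling could then list the most frequently used methods first.
    for name, count in method_usage("ecosystem_projects").most_common(10):
        print(f"{name}: {count}")
```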
Abstract:
Background: The RCSB Protein Data Bank (PDB) provides public access to experimentally determined 3D structures of biological macromolecules (proteins, peptides and nucleic acids). While various tools are available to explore the PDB, options to access the global structural diversity of the entire PDB and to perceive relationships between PDB structures remain very limited. Methods: A 136-dimensional atom-pair 3D-fingerprint for proteins (3DP), counting categorized atom pairs at increasing through-space distances, was designed to represent the molecular shape of PDB entries. Nearest-neighbor search examples are reported, exemplifying the ability of 3DP similarity to identify closely related biomolecules, from small peptides to enzymes and large multiprotein complexes such as virus particles. Principal component analysis was used to visualize the PDB in 3DP space. Results: The 3DP property space groups proteins and protein assemblies according to their 3D-shape similarity, yet shows an exquisite ability to distinguish between closely related structures. An interactive website called PDB-Explorer is presented, featuring a color-coded interactive map of the PDB in 3DP space. Each pixel of the map contains one or more PDB entries, which are directly visualized as ribbon diagrams when the pixel is selected. The PDB-Explorer website allows 3DP nearest-neighbor searches of any PDB entry or of any structure uploaded as a protein-type PDB file. All functionalities on the website are implemented in JavaScript in a platform-independent manner and draw data from a server that is updated daily with the latest PDB additions, ensuring complete and up-to-date coverage. The essentially instantaneous 3DP-similarity search with the PDB-Explorer provides results comparable to those of much slower 3D-alignment algorithms, and automatically clusters proteins from the same superfamilies in tight groups. Conclusion: A chemical space classification of the PDB based on molecular shape was obtained using a new atom-pair 3D-fingerprint for proteins and implemented in a web-based database exploration tool comprising an interactive color-coded map of the PDB chemical space and a nearest-neighbor search tool. The PDB-Explorer website is freely available at www.cheminfo.org/pdbexplorer and represents an unprecedented opportunity to interactively visualize and explore the structural diversity of the PDB.
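As a toy analogue of the atom-pair distance fingerprint and nearest-neighbor search described above (the real 3DP has 136 categorized dimensions; the binning, similarity measure, and three-atom "structures" here are gross simplifications):

```python
# Histogram pairwise through-space distances into fixed bins and rank database
# entries by fingerprint similarity to a query (illustrative simplification).
import math

def fingerprint(coords, bins=(2.0, 4.0, 8.0, 16.0)):
    """Count atom pairs falling into each distance bin."""
    fp = [0] * (len(bins) + 1)
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            k = next((idx for idx, b in enumerate(bins) if d < b), len(bins))
            fp[k] += 1
    return fp

def city_block_distance(a, b):
    """Smaller value = more similar fingerprints (used for nearest-neighbor ranking)."""
    return sum(abs(x - y) for x, y in zip(a, b))

query = fingerprint([(0, 0, 0), (1.5, 0, 0), (0, 3.0, 0)])
database = {
    "entryA": fingerprint([(0, 0, 0), (1.4, 0, 0), (0, 2.9, 0)]),
    "entryB": fingerprint([(0, 0, 0), (5.0, 0, 0), (0, 9.0, 0)]),
}
print(min(database, key=lambda k: city_block_distance(query, database[k])))  # nearest neighbor
```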