974 results for Accessible websites for people with disabilities


Relevance:

30.00%

Publisher:

Abstract:

This document gives basic guidelines for embedding videos in a web page coded in HTML5, using an accessible player. It also introduces the ccPlayer tool, a video player implemented as a SWF Flash object that supports adding subtitles, and the JWPlayer tool, which supports adding both subtitles and audio description.
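Native HTML5 captioning, which the guidelines above rely on, attaches a WebVTT track to the `<video>` element. A minimal sketch of the markup, generated here as a string (the file paths `intro.mp4` and `intro.en.vtt` are hypothetical examples, not from the document):

```python
def accessible_video_tag(src: str, captions_vtt: str, lang: str = "en") -> str:
    """Build an HTML5 <video> embed with a WebVTT captions track.
    <track kind="captions"> is the standard HTML5 way to attach
    subtitles without a plugin-based player."""
    return (
        f'<video controls>\n'
        f'  <source src="{src}" type="video/mp4">\n'
        f'  <track kind="captions" src="{captions_vtt}" srclang="{lang}" default>\n'
        f'  Your browser does not support HTML5 video.\n'
        f'</video>'
    )

print(accessible_video_tag("intro.mp4", "intro.en.vtt"))
```

The `controls` attribute matters for accessibility too: it exposes the browser's native, keyboard-operable player controls.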

Relevance:

30.00%

Publisher:

Abstract:

EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

Relevance:

30.00%

Publisher:

Abstract:

Information about the genomic coordinates and the sequence of experimentally identified transcription factor binding sites is found scattered under a variety of diverse formats. The availability of standard collections of such high-quality data is important to design, evaluate and improve novel computational approaches to identify binding motifs on promoter sequences from related genes. ABS (http://genome.imim.es/datasets/abs2005/index.html) is a public database of known binding sites identified in promoters of orthologous vertebrate genes that have been manually curated from bibliography. We have annotated 650 experimental binding sites from 68 transcription factors and 100 orthologous target genes in human, mouse, rat or chicken genome sequences. Computational predictions and promoter alignment information are also provided for each entry. A simple and easy-to-use web interface facilitates data retrieval allowing different views of the information. In addition, the release 1.0 of ABS includes a customizable generator of artificial datasets based on the known sites contained in the collection and an evaluation tool to aid during the training and the assessment of motif-finding programs.
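ABS 1.0's artificial-dataset generator plants known binding sites in synthetic sequences so motif finders can be scored against a known ground truth. A minimal sketch of that idea, assuming nothing about the real generator's options (function and parameter names are illustrative):

```python
import random

def make_artificial_promoter(known_sites, length=500, seed=None):
    """Embed known binding-site sequences at random positions in a
    uniform-random ACGT background, recording the true coordinates.
    Returns (sequence, [(position, site), ...]) so a motif finder's
    predictions can be checked against the planted sites."""
    rng = random.Random(seed)
    seq = [rng.choice("ACGT") for _ in range(length)]
    annotations = []
    for site in known_sites:
        pos = rng.randrange(0, length - len(site))
        seq[pos:pos + len(site)] = list(site)  # overwrite background with the site
        annotations.append((pos, site))
    return "".join(seq), annotations
```

A real benchmark would also control GC content and avoid overlapping insertions; this sketch keeps only the core idea.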

Relevance:

30.00%

Publisher:

Abstract:

This work presents a proposal for a website of the Library and Documentation Service (SBD) of the Universitat de Lleida adapted for mobile devices.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is the development of an e-commerce website.

Relevance:

30.00%

Publisher:

Abstract:

The most adequate approach for benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. However, manual evaluation is costly and impractical to apply to large websites; in reality, there is no choice but to rely on automated tools when reviewing large websites for accessibility. The question is: to what extent can the results of automatic evaluation of a website and of individual web pages be used as an approximation of manual results? This paper presents the initial results of an investigation aimed at answering this question. We performed both manual and automatic evaluations of the accessibility of web pages of two sites and compared the results. In our data set, automatically retrieved results could indeed be used as an approximation of manual evaluation results.
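One simple way to quantify how well automatic scores approximate manual ones is the correlation between the two sets of per-page scores. A minimal sketch (the paper's actual statistical method is not specified here):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length score series,
    e.g. automatic vs. manual per-page accessibility scores.
    Returns a value in [-1, 1]; values near 1 mean the automatic
    ranking closely tracks the manual one."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Note this sketch assumes neither series is constant (which would make the denominator zero).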

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate web-based information on bipolar disorder and to assess particular content quality indicators. METHODS: Two keywords, "bipolar disorder" and "manic depressive illness", were entered into popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability and content quality. The "Health on the Net" (HON) quality label and DISCERN scale scores were used to verify their efficiency as quality indicators. RESULTS: Of the 80 websites identified, 34 were included. Based on the outcome measures, the content quality of the sites turned out to be good. The content quality of websites dealing with bipolar disorder is significantly explained by readability, accountability and interactivity, as well as by a global score. CONCLUSIONS: The overall content quality of the studied bipolar disorder websites is good.
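The proforma rates each site on several dimensions (accountability, presentation, interactivity, readability, content quality). A minimal sketch of how such per-dimension ratings might be combined into a global score; the study's actual scoring scheme is not given here, so the equal-weight default and the dimension names are purely illustrative:

```python
def global_score(ratings, weights=None):
    """Weighted mean of per-dimension quality ratings.
    `ratings` maps dimension name -> numeric rating; `weights` maps
    dimension name -> weight (defaults to equal weights)."""
    if weights is None:
        weights = {k: 1.0 for k in ratings}
    total_w = sum(weights.values())
    return sum(ratings[k] * weights[k] for k in ratings) / total_w
```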

Relevance:

30.00%

Publisher:

Abstract:

This paper presents research on converting non-accessible web pages containing mathematical formulae into accessible versions using an OCR (Optical Character Recognition) tool. The objective of this research is twofold: first, to establish criteria for evaluating the potential accessibility of mathematical websites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones; second, to propose a data model and a mechanism for publishing evaluation results, making them available to the educational community, which may use them as a quality measure when selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, which makes a correct conversion difficult if not nearly impossible; and the formulae (whether images or text) have been written without regard to standards of mathematical writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
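The paper's second objective is a data model for publishing evaluation results. A minimal sketch of what one evaluation record could look like; the field names and the derived metric are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MathAccessibilityReport:
    """One evaluation record for a math web page."""
    url: str
    uses_mathml: bool          # already accessible if True
    formula_count: int         # formulae found on the page
    ocr_convertible: int       # formulae the OCR tool recognized correctly
    notes: list = field(default_factory=list)

    def conversion_rate(self) -> float:
        """Fraction of formulae the OCR tool could convert."""
        if self.formula_count == 0:
            return 1.0
        return self.ocr_convertible / self.formula_count
```

Publishing such records in a shared format is what would let educators compare the "potential accessibility" of candidate learning materials.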

Relevance:

30.00%

Publisher:

Abstract:

This work evaluates the usability and accessibility of the web portal of the Ajuntament de Sant Andreu de la Barca (Barcelona).

Relevance:

30.00%

Publisher:

Abstract:

The dissemination of information has become one of the critical elements in the strategic plans of universities. Concerned about their position in the international rankings that evaluate such institutions, universities are also beginning to pay attention to their information-dissemination strategies and their institutional websites. This work addresses the main functions of university websites; the analysis, design and implementation process involved in creating such sites; the minimum elements that should be included in a university web style guide; and some of the main national and international standards and references for such guides.

Relevance:

30.00%

Publisher:

Abstract:

The aim is to study content personalization on the Internet. The amount of content that companies offer on their websites has grown explosively. Through personalization, customers receive exactly the content they want and need. Personalization requires customer profiling, and collecting customer data raises concerns about loss of privacy. The research is conducted as a case study of five companies that act as content providers, based on existing material and on participant observation of the target companies. Four basic approaches to content personalization can be identified. Profiling is carried out mainly either from information customers provide themselves or by observing their behaviour on the website. In the future, clear rules will be needed for collecting and using customer data. Customers want personalized content, but content providers must earn their trust regarding the protection of privacy; the importance of trust will only grow as personalization is developed further.

Relevance:

30.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users who rely on search engines alone cannot discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on deep web sites in English; one can therefore expect their findings to be biased, especially given the steady increase in non-English web content. In this light, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build, and make publicly available, a dataset describing more than 200 web databases from that national segment.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are also web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
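The core of automated form querying is representing a search interface and programmatically filling it out. A minimal sketch for the simple GET-form case; the record's field names and the example URL are illustrative assumptions, not the thesis's actual data model:

```python
from dataclasses import dataclass
from urllib.parse import urlencode

@dataclass
class SearchInterface:
    """A discovered web search form, as a querying system might store it."""
    action: str   # the form's submission URL
    method: str   # "get" or "post"
    fields: dict  # field name -> extracted human-readable label

def build_query_url(iface: SearchInterface, values: dict) -> str:
    """For GET forms, a filled-out query is simply the action URL
    plus the URL-encoded field values; fetching it returns the
    dynamic result page that embeds the database records."""
    assert iface.method.lower() == "get", "POST forms need a request body instead"
    return iface.action + "?" + urlencode(values)

iface = SearchInterface("http://example.org/search", "get",
                        {"q": "Keywords", "lang": "Language"})
print(build_query_url(iface, {"q": "deep web", "lang": "en"}))
```

Real deep-web querying is much harder than this sketch: POST forms, JavaScript-driven forms, and extracting structured data back out of the result pages all require the label-extraction and result-page techniques the thesis describes.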

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a prototype of an interactive web-GIS tool for risk analysis of natural hazards, in particular floods and landslides, based on open-source geospatial software and technologies. The aim of the presented tool is to assist experts (risk managers) in analysing the impacts and consequences of a given hazard event in a considered region, providing essential input to the decision-making process when responsible authorities and decision makers select risk management strategies. The tool is built on the Boundless (OpenGeo Suite) framework and its client-side environment for prototype development, and it is one of the main modules of a web-based collaborative decision support platform for risk management. Within this platform, users can import the maps and information needed to analyse areas at risk. Based on the provided information and parameters, loss scenarios (amount of damage and number of fatalities) for a hazard event are generated on the fly and visualized interactively within the web-GIS interface of the platform. The annualized risk is calculated by combining the resulting loss scenarios for different return periods of the hazard event. The application of the developed prototype is demonstrated using a regional data set from the Fella River basin in northeastern Italy, one of the case-study sites of the Marie Curie ITN CHANGES project.
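Combining loss scenarios of different return periods into an annualized risk is commonly done by integrating the loss-exceedance curve, where each return period T corresponds to an annual exceedance probability 1/T. A minimal sketch of that calculation, assuming simple trapezoidal integration (the platform's exact formula may differ):

```python
def annualized_risk(scenarios):
    """Annualized risk from loss scenarios.
    `scenarios` is a list of (return_period_years, loss) pairs.
    Each return period T maps to an annual exceedance probability
    1/T; the risk is the area under the loss-exceedance curve,
    approximated here with the trapezoidal rule."""
    # convert to (annual exceedance probability, loss), sorted by probability
    pts = sorted((1.0 / t, loss) for t, loss in scenarios)
    risk = 0.0
    for (p0, l0), (p1, l1) in zip(pts, pts[1:]):
        risk += (p1 - p0) * (l0 + l1) / 2.0
    return risk
```

For example, with zero loss at the 10-year event and a loss of 100 at the 100-year event, the curve is integrated between probabilities 0.01 and 0.1.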

Relevance:

30.00%

Publisher:

Abstract:

Browsing the web has become one of the most important features of high-end mobile phones, and in the future more and more people will use mobile phones for web browsing. Large touchscreens improve the browsing experience, but many websites are designed to be used with a mouse. A touchscreen differs substantially from a mouse as a pointing device, so mouse-emulation logic is required in browsers to make more websites usable. This Master's thesis lists the most significant cases where the differences between a mouse and a touchscreen affect web browsing. Five touchscreen mobile phones and their web browsers were evaluated to find out whether and how these cases are handled. As part of this thesis, a simple QtWebKit-based mobile web browser with an advanced mouse-emulation model was also implemented, aiming to solve all the problematic cases. The conclusion of this work is that it is feasible to emulate a mouse with a touchscreen and thus deliver a good user experience in mobile web browsing. However, current high-end touchscreen mobile phones have relatively underdeveloped mouse emulation in their web browsers, and there is much room for improvement.
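A core problem of mouse emulation is that a tap carries no hover phase, while many mouse-oriented pages depend on hover events. One plausible mapping is for the browser to synthesize the full event sequence a click handler expects. This sketch lists an illustrative order; real browsers differ in the details (e.g. delaying the click to detect double-tap zoom), and these are not the thesis's exact rules:

```python
def tap_to_mouse_events(x, y):
    """Synthesize the mouse event sequence for a single tap at (x, y).
    The pointer 'teleports' to the target, so hover-dependent handlers
    (mousemove/mouseover) must be fired explicitly before the click."""
    return [
        ("mousemove", x, y),   # pointer arrives at the target
        ("mouseover", x, y),   # hover handlers the page may rely on
        ("mousedown", x, y),
        ("mouseup", x, y),
        ("click", x, y),
    ]
```

Other cases from the same problem family (drag vs. scroll ambiguity, no right button, no hover-out) need their own emulation rules.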

Relevance:

30.00%

Publisher:

Abstract:

This is a workshop given in 2002 as part of the continuing-education weeks for dietitians organized by the Département de nutrition of the Université de Montréal. After a brief introduction to the Internet, the workshop presents the specific characteristics of directories versus search engines, followed by the main sites and search engines useful in the field of nutrition. The second part of the workshop shows how to use the PubMed database, with examples from nutrition.
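The PubMed searches demonstrated in the workshop used the web interface; the same queries can also be issued programmatically through NCBI's E-utilities ESearch endpoint. A minimal sketch that only builds the request URL (the example query term is illustrative):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term: str, retmax: int = 20) -> str:
    """Build an NCBI E-utilities ESearch URL for a PubMed query.
    Fetching it returns an XML list of matching PubMed IDs
    (this sketch deliberately stops short of making the request)."""
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "retmax": retmax})

print(pubmed_search_url("vitamin D AND nutrition"))
```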