958 results for WEB (Computer program language)


Relevance:

40.00%

Publisher:

Abstract:

This paper analyses how young people express themselves in the forum of the website adolescents.cat from two points of view. On the one hand, it analyses the productivity and recursiveness of the various slang-formation processes. On the other hand, it studies the alterations made to the writing code for stylistic and practical purposes.

Relevance:

40.00%

Publisher:

Abstract:

This document sums up a project aimed at building a new web interface to the Apertium machine translation platform, including pre-editing and post-editing environments. It contains a description of the work accomplished on this project, as well as an overview of possible evolutions.
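Apertium exposes its translation engine over HTTP through its APy service, whose /translate endpoint takes a langpair and a q (query text) parameter; a web interface like the one described would issue requests of this shape. The host, port and language pair below are placeholders, not details from the project itself:

```python
from urllib.parse import urlencode

def build_translate_url(base, text, langpair):
    """Build a GET request URL for an Apertium APy-style /translate endpoint.

    The base URL here is a placeholder; langpair uses Apertium's
    source|target notation, which urlencode escapes as %7C.
    """
    query = urlencode({"langpair": langpair, "q": text})
    return f"{base}/translate?{query}"

# Example: ask a locally running APy instance for an English-to-Catalan translation.
url = build_translate_url("http://localhost:2737", "web interface", "eng|cat")
print(url)
```

A pre-editing environment would transform the text before building this request, and a post-editing environment would operate on the JSON the endpoint returns.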

Relevance:

40.00%

Publisher:

Abstract:

The objective of this article is to systematically assess the quality of French-language web-based information on alcohol dependence. Using a standardised pro forma, the authors analysed the 20 most highly ranked pages identified by 3 common internet search engines for each of 2 keywords. In total, 45 sites were analysed. The authors conclude that the overall quality of the sites was relatively poor, especially regarding the description of possible treatments, though with wide variability. Content quality was not correlated with other aspects of quality such as interactivity, aesthetics or accountability.
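The abstract does not give the pro forma's actual criteria or rating scale, but the aggregation step it implies can be sketched as follows; the criterion names, the 0–4 scale, and the example ratings are all hypothetical:

```python
# Hypothetical criteria standing in for the study's standardised pro forma.
CRITERIA = ["treatment_description", "interactivity", "aesthetics", "accountability"]

def quality_scores(ratings):
    """Average each criterion's rating across all rated sites (0-4 scale assumed)."""
    return {c: sum(site[c] for site in ratings) / len(ratings) for c in CRITERIA}

# Two made-up site ratings, illustrating a low treatment-description score
# alongside middling scores on the other dimensions.
sites = [
    {"treatment_description": 1, "interactivity": 3, "aesthetics": 2, "accountability": 2},
    {"treatment_description": 0, "interactivity": 2, "aesthetics": 3, "accountability": 1},
]
print(quality_scores(sites))
```

Keeping the dimensions separate like this is what allows the study's final observation, that content quality need not track interactivity, aesthetics or accountability.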

Relevance:

40.00%

Publisher:

Abstract:

This paper evaluates the usability and accessibility of the web portal of the City Council of Sant Andreu de la Barca (Barcelona).

Relevance:

40.00%

Publisher:

Abstract:

The project consists of the creation of a web GIS viewer for meteorological data. This tool will allow users to access a website where they can consult and visualise different cartographic and meteorological data. In addition, they will be able to overlay different datasets on a map for analysis, thus easing the work of meteorologists. It will also be possible to access data from different dates and visualise a history of weather conditions, facilitating the task of building historical models and making it possible to see how the climate has evolved over time.
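Web GIS viewers of this kind commonly fetch map layers via the OGC Web Map Service protocol, whose GetMap request supports both overlaying several layers and a TIME dimension for historical dates. A minimal sketch of building such a request; the endpoint, layer names and bounding box are placeholders, not details from the project:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layers, bbox, time, size=(800, 600)):
    """Build an OGC WMS 1.3.0 GetMap request URL.

    The server URL and layer names below are hypothetical examples.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),    # several datasets overlaid in one image
        "CRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
        "TIME": time,                  # request the map for a historical date
    }
    return f"{base}?{urlencode(params)}"

# Example: temperature and precipitation over a Catalonia-sized bounding box
# for a past date, as a viewer's history feature might request them.
url = wms_getmap_url("https://example.org/wms",
                     ["temperature", "precipitation"],
                     (40.5, 0.1, 42.9, 3.3), "2011-06-01")
```

Stepping the TIME parameter through a range of dates is one straightforward way to implement the historical playback the project describes.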

Relevance:

40.00%

Publisher:

Abstract:

This paper evaluates the usability of the Renfe website, comparing it with sites of similar characteristics, particularly in France and Germany. For this analysis, a user test is carried out with people who match the profile of the site's potential users.

Relevance:

40.00%

Publisher:

Abstract:

The main objective of this article is to analyse the social game CityVille, created by the company Zynga, one of the latest Facebook trends available on the web. CityVille was chosen as the corpus out of an interest in understanding why this game has been so successful and is currently one of the most prominent social/digital games among users of the Facebook social network. The aim is thus to understand how social games have evolved and transformed communicational relations between network users. Social networks have become increasingly important and are bound up with people's lives. With the development of digital language, the way people interact is transformed, since they communicate through the computer in real time. The study seeks to carry out a plural analysis of CityVille, highlighting different points of view on the social game. We propose to examine the relations between use and users, and between the technology and the content of the game. In the conclusions we set out the possible directions for the future of social games on the web.

Relevance:

40.00%

Publisher:

Abstract:

The number of Internet services is growing constantly. A person usually has one digital identity in every service they use. Storing authentication credentials securely becomes ever harder as a new set accumulates with every service registration. This Master's thesis examines the problem and its solutions from both a service-oriented and a technical point of view. The business concept of service-oriented identity management and its implementation technologies, such as single sign-on (SSO) and the Security Assertion Markup Language (SAML), are covered through rough examples, along with the concept and technical details of the solution produced in the Nokia Account project. Finally, the implementation of the first version of the Nokia Account service is analysed against design principles and requirements for identity management services.
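The core idea behind SSO assertions can be illustrated with a toy example: an identity provider signs a set of claims once, and relying services verify the signature instead of each storing their own credentials. Real SAML assertions are signed XML and the Nokia Account internals are not described here; this is only a minimal HMAC-based sketch of the concept:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-idp-secret"  # placeholder key shared by IdP and relying service

def issue(claims):
    """Identity provider side: encode the claims and sign them."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body + b"." + sig.encode()

def verify(token):
    """Relying service side: accept the claims only if the signature checks out."""
    body, sig = token.rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(body))

token = issue({"sub": "user@example.com"})
assert verify(token) == {"sub": "user@example.com"}
```

The user authenticates once with the issuer; every service that trusts the issuer's key can then accept the token, which is the credential-sprawl problem the thesis addresses.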

Relevance:

40.00%

Publisher:

Abstract:

CoCo is a collaborative web interface for the compilation of linguistic resources. In this demo we present one of its possible applications: paraphrase acquisition.

Relevance:

40.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. Findings from these surveys may therefore be biased, especially given the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. Such assumptions rarely hold, however, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
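One concrete step that any system querying web databases must perform is extracting the fillable fields of an HTML search form. The form below is a made-up example, and this stdlib-only sketch handles plain HTML only; recognizing JavaScript-rich and non-HTML forms, as the I-Crawler does, requires considerably more machinery:

```python
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collect the named input fields of an HTML form."""

    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            a = dict(attrs)
            if a.get("name"):  # unnamed controls (e.g. submit buttons) carry no query data
                self.fields.append((a["name"], a.get("type", tag)))

# A hypothetical search interface to a book database.
html = """
<form action="/search" method="get">
  <input type="text" name="title">
  <select name="category"><option>books</option></select>
  <input type="submit" value="Go">
</form>
"""
parser = FormFieldParser()
parser.feed(html)
print(parser.fields)  # the submit button has no name, so only query fields remain
```

The resulting (name, type) pairs are the raw material for the kind of search-interface data model the thesis describes: once the fields are known, a query system can fill them programmatically instead of relying on a human.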

Relevance:

40.00%

Publisher:

Abstract:

The project describes the process of a web audit that will serve to detect and resolve vulnerabilities both in the application code and in the configuration of the system that hosts it.