990 results for Google Web Toolkit -- Evaluation


Relevance:

30.00%

Publisher:

Abstract:

The most adequate approach to benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. But manual evaluation has a high cost and is impractical to apply to large websites. In practice, there is no choice but to rely on automated tools when reviewing large websites for accessibility. The question is: to what extent can the results of automatically evaluating a website and its individual web pages be used as an approximation of manual results? This paper presents the initial results of an investigation aimed at answering this question. We performed both manual and automatic evaluations of the accessibility of the web pages of two sites and compared the results. In our data set, automatically retrieved results could indeed be used as an approximation of manual evaluation results.
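As a rough illustration of how such a comparison might be quantified (this is not from the paper; the per-page violation counts below are invented), one can rank pages by the number of accessibility violations each method reports and check how well the two rankings agree:

```python
# Hypothetical violation counts for the same ten pages under both methods.
from scipy.stats import spearmanr

manual_violations = [12, 3, 7, 0, 5, 9, 2, 14, 6, 1]
automatic_violations = [10, 4, 8, 1, 5, 7, 2, 15, 5, 2]

# A high rank correlation means the automatic tool orders pages by
# accessibility roughly the way the manual expert review does.
rho, p_value = spearmanr(manual_violations, automatic_violations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```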

Relevance:

30.00%

Publisher:

Abstract:

The teaching innovation group in the area of Botany (GIBAF) at the University of Barcelona (UB) sets out each academic year to design new assessment activities within a continuous-evaluation framework. We present the experience carried out during the 2008-09 academic year in the Pharmaceutical Botany course. The aim was to involve students for a semester in the authorship of a tutored project of immediate usefulness and lasting value, beyond its role in assessment. The Medicinal Plants Garden of the Monastery of Pedralbes was used as a resource, and a teaching collaboration agreement was signed between the UB and the Institute of Culture of Barcelona. The students carried out the work on the Moodle platform of UB's Virtual Campus in five stages, which included the preparation of plant files that were revised at several steps in response to feedback from the teachers. At the beginning of the activity, students were given the complete schedule of the activity, guidelines for carrying it out, and a total of 18 required bibliographic resources. Finally, a website was built with Google Sites that allows a virtual tour of the garden, documenting with referenced literature the nomenclature, botanical description, distribution, uses (historical, current, and future), and toxicity of 50 medicinal plants. The result of the activity was presented at a public ceremony at the Monastery of Pedralbes and is available at: http://sites.google.com/site/jardimedievalpedralbes/

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is to study the Google App Engine platform in its Python version. To study and test the platform, a free-software web application for publishing travel blogs was developed. The application is built on Django and integrates with other Google services, such as authentication with Google accounts, Google Maps, and Picasa Web Albums.
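As an illustration of the Google-account integration mentioned above, here is a minimal sketch for the classic App Engine Python runtime using the Users API; the project itself is built on Django, so this only illustrates the platform service, not the project's code:

```python
# Minimal sketch: Google-account sign-in on classic App Engine (webapp2).
import webapp2
from google.appengine.api import users

class MainPage(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user:
            # The visitor is signed in with a Google account.
            self.response.write('Hello, %s' % user.nickname())
        else:
            # Redirect anonymous visitors to the Google sign-in page.
            self.redirect(users.create_login_url(self.request.uri))

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```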

Relevance:

30.00%

Publisher:

Abstract:

Rankings of scientific productivity are becoming increasingly relevant, both at the individual and the institutional level. Ensuring that they are based on reliable and exhaustive information is therefore important. This study shows that the position of individuals in this kind of ranking can change substantially when various internationally recognized bibliometric indicators are taken into account. As an illustration, we use the case of the ten professors in the area of 'Personalidad, Evaluación y Tratamiento Psicológico' listed in the recent analysis by Olivas-Ávila and Musi-Lechuga (Psicothema 2010, Vol. 22, no. 4, pp. 909-916).
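As a toy illustration of why the choice of indicator matters (the citation records below are invented, not the study's data), the same two authors rank differently under total citations than under the h-index:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations."""
    cited = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cited, start=1) if c >= rank)

# Invented records: A has a few highly cited papers,
# B has more papers with moderate citation counts.
authors = {
    "A": [25, 8, 5, 3, 3, 1],
    "B": [6, 6, 5, 5, 4, 4, 4],
}

by_citations = sorted(authors, key=lambda a: sum(authors[a]), reverse=True)
by_h = sorted(authors, key=lambda a: h_index(authors[a]), reverse=True)
print("by total citations:", by_citations)  # ['A', 'B']
print("by h-index:        ", by_h)          # ['B', 'A']
```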

Relevance:

30.00%

Publisher:

Abstract:

This work evaluates the usability of the Renfe website, comparing it with sites of similar characteristics, particularly in France and Germany. For this analysis, a user test is conducted with participants who match the profile of the site's potential users.
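A minimal sketch (with invented data, not taken from the study) of how the task-level measures from such a user test might be summarized:

```python
# Each task maps to a list of (completed?, seconds taken) per participant.
results = {
    "buy a Madrid-Barcelona ticket": [(True, 210), (True, 185), (False, 300)],
    "look up a train timetable":     [(True, 95), (True, 120), (True, 80)],
}

for task, runs in results.items():
    success_rate = sum(ok for ok, _ in runs) / len(runs)
    mean_time = sum(t for _, t in runs) / len(runs)
    print(f"{task}: {success_rate:.0%} success, {mean_time:.0f}s mean time")
```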

Relevance:

30.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the deep Web surveys conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that their findings may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
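As a rough sketch of what automated form querying involves, assuming a plain HTML form reachable over standard HTTP (the I-Crawler additionally handles JavaScript-rich and non-HTML forms, which this sketch does not), one might do something like the following; the URL and field handling are illustrative only:

```python
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def query_search_form(page_url, query_term):
    """Fill the first text field of the first form on a page and submit it."""
    html = requests.get(page_url, timeout=10).text
    form = BeautifulSoup(html, "html.parser").find("form")
    action = urljoin(page_url, form.get("action", ""))
    method = form.get("method", "get").lower()

    # Keep hidden/default input values; put the query in the first text field.
    data = {}
    for field in form.find_all("input"):
        name = field.get("name")
        if not name:
            continue
        if query_term and field.get("type", "text") == "text":
            data[name], query_term = query_term, None  # fill only once
        else:
            data[name] = field.get("value", "")

    # The response is one of the dynamic result pages search engines miss.
    if method == "post":
        return requests.post(action, data=data, timeout=10)
    return requests.get(action, params=data, timeout=10)
```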

Relevance:

30.00%

Publisher:

Abstract:

A mobile application developed with HTML5, CSS, AngularJS, and PhoneGap to retrieve the activities published on the website www.festamajor.biz. The application displays Google advertising on the device in order to generate revenue for its maintenance.

Relevance:

30.00%

Publisher:

Abstract:

This final degree project (TFC) aims to show the need to implement usability in order to improve websites, especially service-oriented ones such as an educational website.

Relevance:

30.00%

Publisher:

Abstract:

The topic of this thesis is professional translators' information retrieval when only online sources are available. The study examines where and how professional translators search for information on the Internet when translating a source text from English into Finnish. In addition, it aims to show that information retrieval skills and source criticism are translation competences that should be both maintained and taught as part of translator training. The research data were collected empirically using three methods. The translation process, and the information searches carried out during it, were recorded with the Camtasia screen-recording software and the Translog-II keystroke-logging program. In addition, the participating translators completed two questionnaires, the first with background questions and the second with retrospective questions about the process itself; the questionnaires were administered with the Webropol survey tool. Data were collected from a total of five test sessions. The study looked more closely at the information retrieval actions of three professional translators by isolating, within their translation processes, the pauses during which they searched for information on the Internet. Regarding the online sources used, the results match those of earlier studies: the most used were Google, Wikipedia, and various online dictionaries. However, this study revealed that professional translators' information retrieval practices vary depending on both the translator's field of specialization and the level of their information retrieval skills. When forced to work outside their familiar working environment and their own field of specialization, some professional translators also fall back on the rudimentary search techniques that translation students have commonly been observed to use. The results also revealed that information retrieval can take up to 70 percent of the total time spent on the translation process, depending on the translator's prior knowledge of the source text's subject matter and on the efficiency of their searches. On the basis of these results, professional translators too should develop their information retrieval skills to keep their translation process efficient. In addition, translators should remember to evaluate their information sources critically: source criticism is needed especially when online sources are used. For this reason, information retrieval skills and source criticism should be taught and practiced already as part of translator training. Nor should translators leave information retrieval to online sources alone, but should continue to make use of printed sources as well as personal contacts.
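A minimal sketch of the time-share computation behind such a pause analysis, with invented interval data rather than the study's recordings:

```python
session_seconds = 3600  # length of the recorded translation session

# (start, end) offsets in seconds of the pauses during which the screen
# recording shows the translator searching the web.
search_pauses = [(120, 480), (900, 1500), (2000, 2900)]

search_time = sum(end - start for start, end in search_pauses)
print(f"time spent on information retrieval: {search_time / session_seconds:.0%}")
# -> time spent on information retrieval: 52%
```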

Relevance:

30.00%

Publisher:

Abstract:

This study examines the efficiency of the search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines increasingly serve as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for the top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions. The rationale is that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for most online activities today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies of SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising and evaluates Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising differ between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and give managers a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
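For reference, the performance metrics named above are simple ratios; a small sketch with invented campaign numbers:

```python
campaigns = {
    # keyword: (impressions, clicks, orders) -- numbers invented
    "running shoes": (50_000, 1_250, 45),
    "trail running shoes": (12_000, 480, 30),
}

for keyword, (impressions, clicks, orders) in campaigns.items():
    ctr = clicks / impressions  # click-through rate
    cr = orders / clicks        # conversion rate
    print(f"{keyword}: CTR {ctr:.2%}, CR {cr:.2%}")
```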

Relevance:

30.00%

Publisher:

Abstract:

This article illustrates the relevance of a theory of the document that represents it along three complementary dimensions: form, text, and medium. Two examples are offered: the evolution of the conception of the Web by its inventor, Tim Berners-Lee, which moves progressively from one dimension to another; and a classification of the strategies of the main firms investing in the web of documents (Amazon, Apple, Google, and Facebook), each of which privileges one of these dimensions.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: An email information literacy program has been running effectively for over a decade at the Université de Montréal Health Library. Students periodically receive messages highlighting the content of guides on the library's website. We wish to evaluate, using Google Analytics, the effects of the program on specific webpage statistics. Using the data collected, we may pinpoint popular guides as well as others that need improvement. Methods: In the program, first- and second-year medical (MD) or dental (DMD) students receive eight bi-monthly email messages. The DMD mailing list also includes graduate students and professors. Enrollment in the program is optional for MD students but mandatory for DMD students. Google Analytics (GA) profiles have been configured for the libraries' websites to collect visitor statistics since June 2009. The GA Links Builder was used to design unique links specifically associated with the originating emails. This approach allowed us to gather information on guide usage, such as the visitor's program of study, the duration of page viewing, and the number of pages viewed per visit, as well as browsing data. We also followed the evolution of clicks on GA unique links over time, as we believed that users may keep the library's emails and refer to them later to access specific information. Results: The proportion of students who actually clicked the email links was, on average, less than 5%. MD and DMD students behaved differently with respect to guide views, number of pages visited, and length of time on the site. The CINAHL guide was the most visited by DMD students, whereas MD students consulted the Pharmaceutical information guide most often. We noted that some students visited the referenced guides several weeks after receiving the messages, thus keeping them for future reference; browsing to additional pages on the library website was also frequent. Conclusion: The mixed success of the program prompted us to survey students directly on the format, frequency, and usefulness of the messages. The information gathered from the GA links as well as from the survey will allow us to redesign our web content and modify our email information literacy program so that messages are more attractive, timely, and useful for students.
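Unique links of this kind typically work by appending Google Analytics campaign parameters (utm_source, utm_medium, utm_campaign) to the target URL; a minimal sketch, with hypothetical parameter values and guide URL:

```python
from urllib.parse import urlencode

def tagged_link(page_url, campaign):
    """Append Google Analytics campaign parameters to a guide URL."""
    params = {
        "utm_source": "email",       # where the visit came from
        "utm_medium": "newsletter",  # the marketing medium
        "utm_campaign": campaign,    # identifies the originating message
    }
    return f"{page_url}?{urlencode(params)}"

# Hypothetical guide URL and campaign name, for illustration only.
print(tagged_link("https://bib.umontreal.ca/guides/cinahl", "md-msg-3"))
```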

Relevance:

30.00%

Publisher:

Abstract:

[Slide deck] Web 2.0 Technologies for Education, by G. Santhosh Kumar, Dept. of Computer Science, Cochin University. The opening slides cover: What is the Internet? (CUSAT is linked to the Web through 10 Mbps leased-line connectivity); the size of the Web, with engine-coverage estimates labelled GYWA (sorted on Google, Yahoo!, Windows Live Search (MSN Search) and Ask) and YGWA (sorted on Yahoo!, Google, Windows Live Search (MSN Search) and Ask), from www.worldwidewebsize.com; and the video 'The Machine is Us/ing Us' (http://in.youtube.com/watch?v=NLlGopyXT_g&feature=channel).

Relevance:

30.00%

Publisher:

Abstract:

Wikiloc is a free web service for visualizing and sharing GPS routes and points of interest. Using free software and the Google Maps API, Wikiloc serves as a personal database of GPS locations. From any Internet connection, a GPS user can upload GPS data and immediately visualize the route and waypoints over different base maps, including external WMS (Web Map Service) map servers, or download them to Google Earth for 3D viewing. Alongside the map, the service displays the elevation profile, distance, cumulated elevation gain and loss, and any photos or comments the user wishes to add.
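As an illustration of the kind of processing behind such a service (a sketch, not Wikiloc's actual code), the distance and cumulated elevation gain of an uploaded track can be computed from GPX track points:

```python
# Read track points from a GPX file and compute total distance and
# cumulated elevation gain (the "profile" data described above).
import math
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def track_stats(gpx_path):
    points = []
    for pt in ET.parse(gpx_path).getroot().iterfind(".//gpx:trkpt", GPX_NS):
        ele = pt.find("gpx:ele", GPX_NS)
        points.append((float(pt.get("lat")), float(pt.get("lon")),
                       float(ele.text) if ele is not None else 0.0))
    distance = gain = 0.0
    for (la1, lo1, e1), (la2, lo2, e2) in zip(points, points[1:]):
        distance += haversine_m(la1, lo1, la2, lo2)
        gain += max(0.0, e2 - e1)  # only count uphill segments
    return distance, gain
```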

Relevance:

30.00%

Publisher:

Abstract:

Search engines such as Google have been characterized as "databases of intentions". This class will focus on different aspects of intentionality on the web, including goal mining, goal modeling, and goal-oriented search.

Readings: M. Strohmaier, M. Lux, M. Granitzer, P. Scheir, S. Liaskos, E. Yu, "How Do Users Express Goals on the Web? An Exploration of Intentional Structures in Web Search", WeKnow'07 International Workshop on Collaborative Knowledge Management for Web Information Systems, in conjunction with WISE'07, Nancy, France, 2007. [Web link]

Readings: U. Lee, Z. Liu, and J. Cho, "Automatic Identification of User Goals in Web Search", WWW '05: Proceedings of the 14th International World Wide Web Conference, pp. 391-400, 2005. [Web link]