978 results for Google Store
Abstract:
Academic and scientific journals are one of the main channels for communicating and disseminating research results. However, not all of them enjoy the same prestige and degree of influence within the scientific community. The chances that a work will be known, read, and cited depend, to a large extent, on the journal's quality and visibility. This study aims to determine the visibility in Google Scholar of a set of Latin American journals in Library and Information Science, and to classify them according to their degree of visibility. Using the Publish or Perish application to retrieve the journals' citations, citation-based indicators were calculated. Multivariate data analysis techniques were combined to cluster the journals according to their degree of visibility, and graphical representations were produced that facilitate the interpretation of the results and the identification of the groups of journals with high, medium, and low visibility.
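A minimal sketch of the kind of citation-based indicator the abstract mentions: computing an h-index per journal from its articles' citation counts and binning journals into visibility groups. The journal names, citation counts, and thresholds are hypothetical; the study itself derived its groups via multivariate clustering, not fixed cut-offs.

```python
def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journals and per-article citation counts, for illustration only.
journals = {
    "Revista A": [25, 18, 12, 9, 4, 2, 0],
    "Revista B": [6, 5, 3, 1, 0],
    "Revista C": [1, 1, 0, 0],
}

def visibility_group(h):
    # Hypothetical fixed thresholds; the study instead derived its
    # groups by clustering several citation indicators at once.
    return "high" if h >= 4 else "medium" if h >= 2 else "low"

for name, cites in journals.items():
    h = h_index(cites)
    print(name, h, visibility_group(h))
```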
Abstract:
This final degree project consists of a study of the usability and accessibility of three multimedia managers: iTunes, Google Music and Amazon Cloud Player. Through contextual observations, heuristic evaluations and user tests, I carried out a comparative analysis. To summarize all this information, I compiled a table with the strengths and weaknesses of each application, concluding with proposed improvements for each of the media managers.
Abstract:
This paper describes the results of research on diverse areas of information technology applied to cartography. Its final result is a complete, custom web-based geographic information system, designed and implemented to manage archaeological information for the city of Tarragona. The goal of the platform is to display geographical and alphanumerical data in a web-focused application and to provide specific queries to explore them. Various tools have been used, among others: the PostgreSQL database management system together with its geographic extension PostGIS, the GeoServer map server, the GeoWebCache tile cache, the map viewer plus map and satellite imagery from Google Maps, location imagery from Google Street View, and other open-source libraries. The technology was chosen after an investigation of the project's requirements and shaped a large part of its development. Apart from the Google Maps tools, which are not open source but are free, the entire design has been implemented with free and open-source tools.
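To illustrate the kind of spatial query a PostgreSQL/PostGIS-backed map viewer typically issues, here is a sketch that builds a bounding-box filter. The table and column names (`archaeo_sites`, `geom`) are hypothetical; `ST_Intersects` and `ST_MakeEnvelope` are standard PostGIS functions.

```python
def bbox_query(table, geom_col, xmin, ymin, xmax, ymax, srid=4326):
    """Return SQL selecting rows whose geometry intersects a bounding box."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE ST_Intersects({geom_col}, "
        f"ST_MakeEnvelope({xmin}, {ymin}, {xmax}, {ymax}, {srid}))"
    )

# Hypothetical extent roughly around Tarragona (WGS 84 lon/lat).
sql = bbox_query("archaeo_sites", "geom", 1.23, 41.10, 1.27, 41.13)
print(sql)
```

A query like this would be sent to PostgreSQL by the web tier whenever the Google Maps viewer pans or zooms, returning only the features inside the visible extent.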
Abstract:
The 2.0 era has arrived, and with it social networks have become part of the daily life of many of the planet's inhabitants. Every day, millions of people connect and interact through them in order to be entertained and informed. All of this has forced many traditional media outlets to reinvent themselves to offer users a service that years ago they did not even contemplate and that in some cases caught them off guard. For this reason, this final degree project analyses the activity of the media outlets El País, El Mundo, La Vanguardia and El Periódico de Catalunya on the social networks Google+ and LinkedIn. Popularity, interactivity and influence will be the determining factors in establishing which outlet makes the best use of these minority social networks, which are gaining ground day by day.
Abstract:
The main objective of this work is to study the Google App Engine platform in its Python version. To study and test the platform, a free-software web application has been developed that allows the publication of travel blogs. This application is built on Django and integrates with other Google services, such as authentication with Google accounts, Google Maps, and Picasa Web Albums.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions mostly do not hold true, precisely because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so since the interfaces of conventional search engines are also web forms.
At present, a user has to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
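In the spirit of the data model the abstract describes, here is a minimal illustrative sketch (not the thesis's actual model): a search interface represented as a submission URL plus a list of fields, each carrying the parameter name the server expects and the human-readable label extracted from the page, with a `fill` step that maps label/value assignments to a query URL. The example form and field names are hypothetical.

```python
from dataclasses import dataclass, field
from urllib.parse import urlencode

@dataclass
class FormField:
    name: str          # parameter name submitted to the server
    label: str         # human-readable label extracted from the page
    default: str = ""

@dataclass
class SearchInterface:
    action: str                       # form submission URL
    method: str = "GET"
    fields: list = field(default_factory=list)

    def fill(self, **values):
        """Build the query URL from label -> value assignments."""
        params = {f.name: values.get(f.label, f.default) for f in self.fields}
        return f"{self.action}?{urlencode(params)}"

# A hypothetical book-search interface with two labeled fields.
books = SearchInterface(
    action="http://example.com/search",
    fields=[FormField("q", "title"), FormField("a", "author")],
)
print(books.fill(title="deep web", author="Shestakov"))
```

An automated agent built on such a model can issue queries by label, without knowing the server-side parameter names, which is the first step toward the form query language the abstract mentions.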
Abstract:
Store-operated Ca(2+) channels (SOCs) are voltage-independent Ca(2+) channels activated upon depletion of the endoplasmic reticulum Ca(2+) stores. Early studies suggest the contribution of such channels to Ca(2+) homeostasis in insulin-secreting pancreatic β-cells. However, their composition and contribution to glucose-stimulated insulin secretion (GSIS) remain unclear. In this study, endoplasmic reticulum Ca(2+) depletion triggered by acetylcholine (ACh) or thapsigargin stimulated the formation of a ternary complex composed of Orai1, TRPC1, and STIM1, the key proteins involved in the formation of SOCs. Ca(2+) imaging further revealed that Orai1 and TRPC1 are required to form functional SOCs and that these channels are activated by STIM1 in response to thapsigargin or ACh. Pharmacological SOCs inhibition or dominant negative blockade of Orai1 or TRPC1 using the specific pore mutants Orai1-E106D and TRPC1-F562A impaired GSIS in rat β-cells and fully blocked the potentiating effect of ACh on secretion. In contrast, pharmacological or dominant negative blockade of TRPC3 had no effect on extracellular Ca(2+) entry and GSIS. Finally, we observed that prolonged exposure to supraphysiological glucose concentration impaired SOCs function without altering the expression levels of STIM1, Orai1, and TRPC1. We conclude that Orai1 and TRPC1, which form SOCs regulated by STIM1, play a key role in the effect of ACh on GSIS, a process that may be impaired in type 2 diabetes.
Abstract:
This master's final project consisted of taking part in, and getting to know first-hand, all the steps of developing an application: writing the script, producing the graphics, programming, and marketing. For this purpose I completed an internship at the company Factoria Interactive SL. During this period I wrote several scripts for different apps related to mathematical content and took part in the programming of the app "Dress up". The last step consisted of publishing the app on the Google Play store. The application can be accessed via the following Google Play link: https://play.google.com/store/apps/details?id=eu.lafactoria.dressup&hl=es
Abstract:
The vast majority of users do not look beyond the second page of results offered by a search engine, so if a site fails to rank among the top 20 results (the first two pages), it can be said that the page is poorly optimized (weak SEO) and is therefore effectively invisible to the user. The overall objective of this project is to conduct a study to discover the factors that determine (or not) the positioning of websites in a search engine.
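The "top 20" visibility criterion described above can be sketched as a simple check against an ordered list of result URLs. The result list here is fabricated for illustration; a real study would obtain rankings from actual search-engine result pages.

```python
def is_visible(site, ranked_urls, top_n=20):
    """True if `site` appears among the first `top_n` ranked results."""
    return any(site in url for url in ranked_urls[:top_n])

# Hypothetical ranking of 30 result URLs (three pages of ten).
results = [f"http://example{i}.com/" for i in range(1, 31)]
print(is_visible("example5.com", results))   # within the first two pages
print(is_visible("example25.com", results))  # beyond the second page
```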
Abstract:
In a micro-founded model, we derive novel incentives for a monopoly search engine to distort its organic and its sponsored results on searches for online content and offline products. Distorting organic results towards content publishers with less effective display advertising and/or distorting sponsored results towards higher margin merchants (by underweighting consumer relevance in search auctions) increase per capita revenues but lower participation. The interplay of these incentives determines search bias and welfare. We also characterize how the welfare consequences of integration into display advertising, as intermediary or publisher, depend on asymmetries, monopolization and targeting.