992 results for digitization, statistics, Google Analytics


Relevance:

100.00%

Publisher:

Abstract:

Over the last twenty years, the development of the Internet has completely changed the way people communicate. The Internet has reduced distances and, above all, websites have given companies a permanently accessible shop window on the world. All of this has led to new consumer behaviours: users are becoming ever more demanding given the vast amount of information available on the Web. It is therefore essential that web companies produce efficient and usable websites that encourage interaction with the user. The web has also seen rapid growth in methodologies for developing and analysing consumer behaviour, and new approaches are constantly being sought to capture the path a user follows before completing a given action within a domain. For this reason, beyond established tools such as questionnaires or tracking through platforms like Google Analytics, this work goes a step further and analyses the "consumAttore" (consumer-actor) in greater depth. An eye-tracker makes it possible to identify the cognitive models underlying the search, evaluation and purchase of a product or a call to action, and to understand how the contents of a web application influence attention and the user experience. The aim of this study is therefore to measure user engagement while navigating a web application and, where necessary, to optimise its contents. To collect the information needed during the experiment, I used a decision-support tool, namely an eye-tracker, followed by the administration of questionnaires.

Relevance:

100.00%

Publisher:

Abstract:

University libraries routinely collect statistics on the use of their print collections and on on-site activity. In parallel, they have steadily incorporated electronic resources and services, which has prompted the development of international standards defining indicators for measuring their use; nevertheless, a standard software tool for this purpose is still lacking. On the other hand, several free and open-source programs exist for measuring website activity. The aim of this work is to determine whether the free web analytics tools AWStats, Google Analytics and Piwik can be used to evaluate the use of electronic resources and services according to the indicators proposed by the standards ANSI/NISO Z39.7-2013, ISO 2789:2003, ISO 20983:2003, BS ISO 11620:2008, EMIS, COUNTER and ICOLC. For this purpose, the website and the online catalogue of the Biblioteca Florentino Ameghino, the central library of the Facultad de Ciencias Naturales y Museo of the Universidad Nacional de La Plata, Argentina, were used as the case study. The results reflect the characteristics of the indicators, the software and the case study; these characteristics are discussed in the conclusions in order to give context and perspective to the question of whether it is feasible to measure the use of a university library's electronic resources and services by means of web analytics software.
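As an illustration of how one of the standards' indicators might be read out of such a tool, the following minimal Python sketch queries a Piwik/Matomo reporting API for monthly visit counts, which could feed an ISO 2789-style "virtual visits" indicator. The instance URL, site id and token are hypothetical placeholders, not details from the study.

```python
# Minimal sketch: pulling monthly visit counts from a Piwik/Matomo instance
# so they can be mapped onto an ISO 2789-style "virtual visits" indicator.
# The base URL, site id and token below are hypothetical placeholders.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

PIWIK_URL = "https://analytics.example.edu/index.php"  # hypothetical instance
PARAMS = {
    "module": "API",
    "method": "VisitsSummary.getVisits",  # standard Piwik/Matomo reporting method
    "idSite": 1,
    "period": "month",
    "date": "2015-01-01,2015-12-31",
    "format": "JSON",
    "token_auth": "REPLACE_WITH_TOKEN",
}

def monthly_virtual_visits():
    """Return a {month: visit count} mapping for the configured site."""
    with urlopen(f"{PIWIK_URL}?{urlencode(PARAMS)}") as resp:
        return json.load(resp)  # e.g. {"2015-01": 1234, "2015-02": 1570, ...}

if __name__ == "__main__":
    for month, visits in monthly_virtual_visits().items():
        print(month, visits)
```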

Relevance:

100.00%

Publisher:

Abstract:

Web analytics is now an unavoidable task for e-commerce companies, since it allows them to analyse the behaviour of their customers. The European project SME-Ecompass aims to develop advanced web analytics tools that are accessible to SMEs. With this motivation, we propose an ontology-based data integration service to collect, integrate and store web trace information from different sources. These traces are consolidated in an RDF repository designed to provide common semantics for the analytics data and a homogeneous service to Data Mining algorithms. The proposed service has been validated with real digital traces (Google Analytics and Piwik) from 15 online shops in different sectors and European countries (UK, Spain, Greece and Germany) collected over several months of activity.
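To make the integration idea concrete, here is a minimal Python sketch, using rdflib, of how a page-view record from either Google Analytics or Piwik might be expressed as RDF triples under a shared vocabulary. The http://example.org/webtrace# namespace and property names are hypothetical stand-ins for the project's actual ontology.

```python
# Sketch of the kind of mapping such an integration service performs:
# a page-view record from Google Analytics or Piwik is expressed as RDF
# triples in a common (hypothetical) vocabulary and stored in one graph.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

TRACE = Namespace("http://example.org/webtrace#")  # hypothetical ontology

def add_pageview(graph, event_id, source, url, visitor, timestamp):
    """Add one page-view event to the shared RDF repository."""
    ev = URIRef(f"http://example.org/events/{event_id}")
    graph.add((ev, RDF.type, TRACE.PageView))
    graph.add((ev, TRACE.source, Literal(source)))        # "GA" or "Piwik"
    graph.add((ev, TRACE.page, URIRef(url)))
    graph.add((ev, TRACE.visitor, Literal(visitor)))
    graph.add((ev, TRACE.timestamp, Literal(timestamp, datatype=XSD.dateTime)))

g = Graph()
add_pageview(g, "ga-001", "GA", "https://shop.example.com/cart", "v42",
             "2014-03-01T10:15:00")
add_pageview(g, "pw-001", "Piwik", "https://shop.example.com/cart", "v43",
             "2014-03-01T10:17:00")
print(g.serialize(format="turtle"))
```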

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this degree project was to design a strategic digital media plan for the launch of a company's new product. The project establishes parameters such as the budget, the types of digital media to be used and the rationale for choosing them; it defines the activities for each social network to be used and shows the cost-benefit relationship of running advertising campaigns on social networks.

Relevance:

60.00%

Publisher:

Abstract:

As usage metrics continue to assume an increasingly central role in library system assessment and analysis, librarians tasked with system selection, implementation, and support are driven to identify metric approaches that combine lower technical complexity with greater data granularity. Such approaches allow systems librarians to present evidence-based claims about platform usage behaviors while reducing the resources necessary to collect such information, thereby offering a novel approach to real-time user analysis as well as a dual benefit of active and preventative cost reduction. As part of the DSpace implementation for the MD SOAR initiative, the Consortial Library Application Support (CLAS) division has begun a test implementation of the Google Tag Manager analytics system to collect custom analytical dimensions that track author- and university-specific download behaviors. Building on the work of Conrad, CLAS seeks to demonstrate that the GTM approach to custom analytics provides granular, metadata-based usage statistics in an approach that will prove extensible for additional statistical gathering in the future. This poster will discuss the methodology used to develop these custom tag approaches, the benefits of using the GTM model, and the risks and benefits associated with further implementation.
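As a rough illustration of what author- and university-level custom dimensions enable once they reach the analytics platform, the following Python sketch aggregates download counts per dimension from an exported report. The file name and column names are hypothetical placeholders, not the MD SOAR configuration.

```python
# Illustrative sketch only: once GTM pushes author- and university-level
# custom dimensions into the analytics platform, an exported report can be
# aggregated per author or per institution. Column names are hypothetical.
import csv
from collections import Counter

def downloads_by_dimension(report_csv, dimension):
    """Sum download events per value of a custom dimension column."""
    totals = Counter()
    with open(report_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row[dimension]] += int(row["downloads"])
    return totals

if __name__ == "__main__":
    by_author = downloads_by_dimension("mdsoar_downloads.csv", "author")
    for author, count in by_author.most_common(10):
        print(f"{author}: {count}")
```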

Relevance:

40.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository (QUT ePrints) uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the 'Request a copy' button, a feature of the EPrints software. At QUT, the repository (QUT ePrints, http://eprints.qut.edu.au) is managed by the Library. The repository is embedded into a number of other systems at QUT, including the staff profile system and the University's research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required, and getting the information from the 'back end' of the repository was very time-consuming for Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics 'dashboard'. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, support availability, skills required) and gave each a weighting. After considering all the known options, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements: it was deemed capable of meeting 21 out of the 31 high-priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints' new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available include cumulative total deposits, time-series deposits, deposits by item type, % full texts, % open access, cumulative downloads, time-series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal versus external downloads, citation data (from Scopus and Web of Science), most popular search terms and non-search referring websites. The data are displayed in chart, map and table formats. The new statistics dashboards are a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they have found the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice Chancellor-Research) to compare the full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased 'encouragement' from Heads of School and Deans in relation to the provision of full-text versions.
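The weighted-criteria comparison described above can be sketched in a few lines of Python; the criteria, weights and scores below are invented for illustration and are not the figures from the QUT evaluation.

```python
# A small sketch of a weighted-criteria comparison: each candidate package is
# scored against criteria, each criterion carries a weight, and the weighted
# totals are ranked. All values here are illustrative placeholders.
WEIGHTS = {"extensible": 3, "support_available": 2, "skills_required": 1}

SCORES = {  # 0 = does not meet criterion, 1 = meets it
    "IRStats":          {"extensible": 1, "support_available": 1, "skills_required": 1},
    "AWStats":          {"extensible": 0, "support_available": 1, "skills_required": 1},
    "Google Analytics": {"extensible": 1, "support_available": 1, "skills_required": 0},
}

def rank(scores, weights):
    """Return packages sorted by weighted score, highest first."""
    totals = {pkg: sum(weights[c] * v for c, v in crit.items())
              for pkg, crit in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for package, total in rank(SCORES, WEIGHTS):
    print(f"{package}: {total}")
```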

Relevance:

30.00%

Publisher:

Abstract:

In 2005, the Association of American Publishers (AAP) and the Authors Guild (AG) sued Google for ‘massive copyright infringement’ for the mass digitization of books for the Google Book Search Project. In 2008, the parties reached a settlement, pending court approval. If approved, the settlement could have far-reaching consequences for authors, libraries, educational institutions and the reading public. In this article, I provide an overview of the Google Book Search Settlement. Firstly, I explain the Google Book Search Project, the legal questions raised by the Project and the lawsuit brought against Google. Secondly, I examine the terms of the Settlement Agreement, including what rights were granted between the parties and what rights were granted to the general public. Finally, I consider the implications of the settlement for Australia. The Settlement Agreement, and consequently the broader scope of the Google Book Search Project, is currently limited to the United States. In this article I consider whether the Project could be extended to Australia at a later date, how Google might go about doing this, and the implications of such an extension under the Copyright Act 1968 (Cth). I argue that without prior agreements with rightholders, our limited exceptions to copyright infringement mean that Google is unlikely to be able to extend the full scope of the Project to Australia without infringing copyright.

Relevance:

30.00%

Publisher:

Abstract:

Acoustic recordings play an increasingly important role in monitoring terrestrial and aquatic environments. However, rapid advances in technology make it possible to accumulate thousands of hours of recordings, more than ecologists can ever listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings on multiple scales, from minutes and hours to days and years. The visualization should facilitate navigation and yield ecologically meaningful information prior to listening to the audio. To construct images, we calculate acoustic indices, statistics that describe the distribution of acoustic energy and reflect content of ecological interest. We combine various indices to produce false-color spectrogram images that reveal acoustic content and facilitate navigation. The technical challenge we investigate in this work is how to navigate recordings that are days or even months in duration. We introduce a method of zooming through multiple temporal scales, analogous to Google Maps. However, the "landscape" to be navigated is not geographical, and therefore not intrinsically visual, but rather a graphical representation of the underlying audio. We describe solutions to navigating spectrograms that range over three orders of magnitude of temporal scale. We make three sets of observations: 1. We determine that at least ten intermediate scale steps are required to zoom over three orders of magnitude of temporal scale; 2. We determine that three different visual representations are required to cover the range of temporal scales; 3. We present a solution to the problem of maintaining visual continuity when stepping between different visual representations. Finally, we demonstrate the utility of the approach with four case studies.
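A minimal Python/NumPy sketch of the false-colour construction follows: three simple per-minute indices are computed and mapped onto the red, green and blue channels of an image row. The indices used here (signal energy, spectral entropy, high-frequency fraction) are simplified stand-ins for the published acoustic indices, chosen only to illustrate the idea.

```python
# Sketch: compute three simple acoustic indices per one-minute block and map
# them onto R, G and B channels to form one row of a false-colour image.
import numpy as np

def block_indices(samples, sr):
    """Return (energy, spectral entropy, high-frequency fraction) for one block."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)
    energy = float(np.mean(samples ** 2))
    entropy = float(-(p * np.log2(p + 1e-12)).sum() / np.log2(len(p)))
    freqs = np.fft.rfftfreq(len(samples), 1 / sr)
    high_frac = float(spectrum[freqs > 4000].sum() / (spectrum.sum() + 1e-12))
    return energy, entropy, high_frac

def false_colour_image(audio, sr, block_seconds=60):
    """Build a (1, n_blocks, 3) RGB image row from a long audio array."""
    n = sr * block_seconds
    blocks = [audio[i:i + n] for i in range(0, len(audio) - n + 1, n)]
    rows = np.array([block_indices(b, sr) for b in blocks])
    # normalise each index to [0, 1] so it can act as a colour channel
    rows = (rows - rows.min(axis=0)) / (np.ptp(rows, axis=0) + 1e-12)
    return rows.reshape(1, -1, 3)

# usage: img = false_colour_image(one_day_of_audio, sr=22050)
```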

Relevance:

30.00%

Publisher:

Abstract:

In competitive combat sporting environments like boxing, statistics on a boxer's performance, including the number and type of punches thrown, provide a valuable source of data and feedback which is routinely used for coaching and performance improvement purposes. This paper presents a robust framework for the automatic classification of a boxer's punches. Overhead depth imagery is employed to alleviate challenges associated with occlusions, and robust body-part tracking is developed for the noisy time-of-flight sensors. Punch recognition is addressed with both multi-class SVM and Random Forest classifiers. A coarse-to-fine hierarchical SVM classifier is presented based on prior knowledge of boxing punches. This framework has been applied to shadow boxing image sequences taken at the Australian Institute of Sport with 8 elite boxers. Results demonstrate the effectiveness of the proposed approach, with the hierarchical SVM classifier yielding 96% accuracy, signifying its suitability for analysing athletes' punches in boxing bouts.
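A sketch of the coarse-to-fine idea, using scikit-learn SVMs, is given below: a coarse classifier first predicts the punch group and a per-group classifier then predicts the specific punch. Feature extraction from the depth-based body-part tracks is assumed to have been done already, and the structure is illustrative rather than the authors' implementation.

```python
# Coarse-to-fine hierarchical SVM sketch: a coarse SVM decides the punch
# group (e.g. straight / hook / uppercut), then a per-group SVM picks the
# specific punch (e.g. left vs right hand). X is a NumPy feature matrix;
# each coarse group is assumed to contain at least two distinct fine labels.
from sklearn.svm import SVC

class HierarchicalSVM:
    def __init__(self):
        self.coarse = SVC(kernel="rbf")
        self.fine = {}  # one fine classifier per coarse group

    def fit(self, X, coarse_labels, fine_labels):
        self.coarse.fit(X, coarse_labels)
        for group in set(coarse_labels):
            idx = [i for i, g in enumerate(coarse_labels) if g == group]
            clf = SVC(kernel="rbf")
            clf.fit(X[idx], [fine_labels[i] for i in idx])
            self.fine[group] = clf

    def predict(self, X):
        groups = self.coarse.predict(X)
        return [self.fine[g].predict(x.reshape(1, -1))[0]
                for g, x in zip(groups, X)]
```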

Relevance:

30.00%

Publisher:

Abstract:

The Geologic Atlas of the United States was digitized and stored in the Texas A&M University institutional repository. Extensive metadata was created, emphasizing the geographic and geologic aspects of the material. The map sheets were also converted into KML files for Google Earth and ESRI shapefiles for use in GIS. A Yahoo! Maps interface allows visualization of the location of each folio and user-friendly browsing across the collection. Details of the project will be discussed, including selection, digitization methods and standards, preservation, metadata, web presence and staffing. Its storage in DSpace, an assortment of publicity outlets, and its inclusion in targeted clearinghouses expand its potential use to national and international audiences.
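As an example of the kind of KML output involved, the following Python sketch writes a single placemark polygon for one folio's map-sheet footprint using only the standard library; the folio name and bounding box are placeholders, not values from the Texas A&M collection.

```python
# Sketch: write a Google Earth KML placemark covering one folio's footprint.
# The folio name and bounding box are illustrative placeholders.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Polygon><outerBoundaryIs><LinearRing><coordinates>
      {w},{s},0 {e},{s},0 {e},{n},0 {w},{n},0 {w},{s},0
    </coordinates></LinearRing></outerBoundaryIs></Polygon>
  </Placemark>
</kml>
"""

def folio_to_kml(name, west, south, east, north, path):
    """Write a KML polygon covering the folio's map-sheet bounding box."""
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(KML_TEMPLATE.format(name=name, w=west, s=south, e=east, n=north))

folio_to_kml("Example folio (placeholder)", -98.0, 30.0, -97.0, 31.0,
             "example_folio.kml")
```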

Relevance:

30.00%

Publisher:

Abstract:

The history of the censuses in Brazil shows that concern with the territorial component of statistical surveys arose with the 1940 census, when, for the first time, IBGE sought to portray aspects of the geographic reality relevant to the data-collection operation on cartographic bases, a complex task given the vast extent of the Brazilian territory and, above all, the quality of the cartographic material available at the time. Today, the demand in Brazil for ever more detailed and geographically positioned information continues to grow. Governors and mayors, and municipal and state planning agencies, endowed with greater autonomy and new responsibilities after the 1988 Constitution, depend on the censuses as never before to define their public policies on the basis of up-to-date information about the populations under their jurisdiction. Demands for geographically referenced information also come from other spheres, ranging from the non-governmental and private sectors to the federal government, which has greatly increased the relevance of the censuses and, consequently, of their results. To meet this demand, IBGE has been continually improving what is known as the Territorial Base (Base Territorial), an integrated system of geographic and alphanumeric information that constitutes the main requirement for guaranteeing adequate coverage of census operations. In this new scenario, IBGE began producing digital territorial base maps during the preparatory work for the 2000 Census, facing difficulties in integrating urban and rural areas and the poor quality of cadastral-scale mapping inputs available in less developed areas, since the Institution does not itself produce cadastral-scale mapping. The proposed methodology aims to improve the quality of the Urban Sector Maps (Mapas de Setores Urbanos, MSU) by using Google Earth imagery together with the MicroStation 95 software, peripherals and conversion applications available at IBGE, establishing a new work routine for producing and replacing urban sector maps so as to guarantee greater territorial representativeness of the statistical data released.

Relevance:

30.00%

Publisher:

Abstract:

With the advent of mass digitization projects, such as the Google Book Search, a peculiar shift has occurred in the way that copyright works are dealt with. Contrary to what has so far been the case, works are turned into machine-readable data to be automatically processed for various purposes without the expression of the works being displayed to the public. In the Google Book Settlement Agreement, this new kind of usage is referred to as 'non-display uses' of digital works. The legitimacy of these uses has not yet been tested by the courts and does not fit comfortably within current copyright doctrine, plainly because the works are not used as works but as something else, namely as data. Since non-display uses may prove to be a very lucrative market in the near future, with the potential to affect the way people use copyright works, we examine non-display uses under the prism of copyright principles to determine the boundaries of their legitimacy. Through this examination, we provide a categorization of the activities carried out under the heading of 'non-display uses', examine their lawfulness under current copyright doctrine and approach the phenomenon through the lens of data protection law, which could apply, by analogy, to the use of copyright works as processable data.

Relevance:

30.00%

Publisher:

Abstract:

Digitization is big news; it's a good idea; and it's inevitable. But let's not get all goggle-eyed over Google right away. Here are five reasons not to tear up your library card quite yet.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the influence of digitization parameters on periapical radiographic image quality with regard to anatomic landmarks. Digitized images (n = 160) were obtained using a flatbed scanner at resolutions of 300, 600 and 2400 dpi; the 2400 dpi radiographs were downsampled to 300 and 600 dpi before storage. Digitization was performed with and without black masking, using 8-bit and 16-bit grayscale, and the images were saved in TIFF format. Four anatomic landmarks were rated by two observers (very good, good, moderate, regular, poor) in two randomized sessions. Intraobserver and interobserver agreement was evaluated with Kappa statistics. Agreement varied according to the anatomic landmark and the resolution used; the cemento-enamel junction was the landmark with the poorest concordance. Overall, concordance ranged from regular to moderate for the intraobserver evaluation and from regular to poor for the interobserver evaluation. The use of black masking produced better results in the digitized images, so covering the radiographs with a mask during digitization is necessary.
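For readers unfamiliar with the agreement measure, a small Python sketch of Cohen's kappa between two observers' ordinal scores is shown below, using scikit-learn; the scores are invented for illustration and are not data from the study.

```python
# Sketch: Cohen's kappa between two observers' ordinal ratings of a landmark.
# The ratings below are illustrative placeholders only.
from sklearn.metrics import cohen_kappa_score

CATEGORIES = ["very good", "good", "moderate", "regular", "poor"]

observer_1 = ["good", "moderate", "good", "regular", "poor", "good"]
observer_2 = ["good", "regular", "good", "regular", "moderate", "good"]

kappa = cohen_kappa_score(observer_1, observer_2, labels=CATEGORIES)
print(f"Interobserver kappa: {kappa:.2f}")
```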

Relevance:

30.00%

Publisher:

Abstract:

As technology has advanced, Big Data have taken on an important role. In this work, software for analysing Big Data with R and Hadoop/MapReduce was implemented in Java. The software was used to analyse the traces released by Google concerning the operation of its data centers.
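The software described is written in Java; as a language-agnostic illustration of the same MapReduce pattern, the following Python sketch implements a Hadoop Streaming style mapper and reducer that count task events per machine in a cluster trace. The CSV column positions are an assumption about a simplified trace layout, not the actual Google trace schema.

```python
# Hadoop Streaming style sketch: count task events per machine in a cluster
# trace. Column positions are assumed for a simplified trace layout.
import sys

def mapper(lines):
    """Emit (machine_id, 1) for every task event line."""
    for line in lines:
        fields = line.rstrip("\n").split(",")
        if len(fields) > 4 and fields[4]:
            print(f"{fields[4]}\t1")  # assumed: column 4 holds the machine id

def reducer(lines):
    """Sum counts per machine; input arrives sorted by key, as Hadoop delivers it."""
    current, total = None, 0
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # run as: cat trace.csv | python this.py map | sort | python this.py reduce
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```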