896 results for Web log analysis
Abstract:
The work presented here consists of the analysis, design and implementation of a web application whose main purpose is to let users look up fuel prices at petrol stations nationwide. The chosen technology platform was J2EE, since this technology, based primarily on the Java language, makes it possible to program and run server-side applications in an economical yet professional way. The application follows the MVC (Model-View-Controller) architectural pattern, using Struts2 as the framework and, for data persistence, Hibernate together with the DAO pattern.
Abstract:
In this paper we look at how web-based social software can be used for qualitative data analysis of online peer-to-peer learning experiences. Specifically, we propose to use Cohere, a web-based social sense-making tool, to observe, track, annotate and visualize discussion-group activities in online courses. We define a specific methodology for data observation and structuring, and present results from the analysis of peer interactions conducted in a discussion forum in a real case study of a P2PU course. Finally, we discuss how network visualization and analysis can be used to gain a better understanding of the peer-to-peer learning experience. To do so, we provide preliminary insights into the social, dialogical and conceptual connections that have been generated within one online discussion group.
Abstract:
Institutional satisfaction surveys, together with analysis of the library server's log file on user behaviour and usage, revealed that accessing content and services was becoming ever more complex, in proportion to the growth of those services and the rising number of users. The growth of the resources and applications developed at the UOC Virtual Library (BUOC) made it necessary to select and implement a search engine that would provide global access to the information resources and services offered to the UOC virtual community, according to user type, language and learning environment. This article describes the process of analysing different products and the implementation of Verity at the BUOC, including the development work carried out in the various applications so that the search engine can perform its function.
Abstract:
This presentation aims to make understandable the use and application context of two webometric techniques, log analysis and Google Analytics, which currently coexist in the Virtual Library of the UOC. First, a comprehensive introduction to webometrics is provided; the case of the UOC's Virtual Library is then analysed, focusing on the assimilation of these techniques and the considerations underlying their use, and covering in a holistic way the process of gathering, processing and exploiting the data. Finally, guidelines are provided for interpreting the metric variables obtained.
Abstract:
The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
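The invariance that underlies zero replacement can be illustrated with a small sketch (this is a generic multiplicative replacement, not the paper's own procedure; the function name and the value of delta are illustrative): each zero part is replaced by a small delta and the non-zero parts are rescaled, so their ratios, and hence the odds, are unchanged.

```python
import numpy as np

def multiplicative_replacement(x, delta=1e-3):
    # Replace zero parts with delta and rescale the non-zero parts
    # so the composition still sums to 1; ratios between non-zero
    # parts (the "odds") are preserved.
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    k = zeros.sum()                       # number of zeros to replace
    return np.where(zeros, delta, x * (1 - k * delta))

x = np.array([0.5, 0.5, 0.0])
y = multiplicative_replacement(x)
print(y, y.sum())   # still a closed composition; y[0]/y[1] == x[0]/x[1]
```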
Abstract:
Compositional data naturally arise from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
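As a minimal sketch of combining a log-ratio technique with a standard multivariate method (in Python rather than the R library the abstract describes; the oxide compositions below are invented for illustration), one can apply a centred log-ratio transform row-wise and then ordinary PCA via the SVD:

```python
import numpy as np

def clr(X):
    # Centred log-ratio: log of each part relative to the row's geometric mean.
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def pca_scores(Z, k=2):
    # Principal component scores from the SVD of the column-centred data.
    Zc = Z - Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T

# Hypothetical 3-part oxide compositions for four artefacts (rows sum to 1),
# two chemically similar pairs.
X = np.array([[0.70, 0.20, 0.10],
              [0.68, 0.22, 0.10],
              [0.50, 0.10, 0.40],
              [0.52, 0.08, 0.40]])
scores = pca_scores(clr(X))
```

Here the first principal component of the clr-transformed data separates the two groups of artefacts, which is the kind of provenance grouping the abstract refers to.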
Abstract:
At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed 'crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, that may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using sub-compositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a 'recipe' consisting of two 'ingredients', sand and a source of soda. Our analysis focuses on the sub-composition of components associated with the sand source. A 'crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA analysis can be undertaken either by normalizing the data or defining a 'residual'. In either case, after some 'tuning', these groups are recovered. The results from the normalized LRA are differently interpreted as showing that the source of sand used to make the glass differed. These results are complementary. One relates to the recipe used. The other relates to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
Abstract:
This paper presents a tool for the analysis and regeneration of Web contents, implemented with XML and Java. At present, Web content is delivered from server to clients without taking the clients' characteristics into account. Heterogeneous and diverse characteristics, such as users' preferences, the different capacities of client devices, different types of access, the state of the network and the current load on the server, directly affect the behavior of Web services. Moreover, the growing use of multimedia objects in the design of Web contents ignores this diversity and heterogeneity, further hindering appropriate content delivery. Thus, the objective of the presented tool is the treatment of Web pages taking the mentioned heterogeneity into account and adapting contents in order to improve performance on the Web.
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analyzing compositional time series consists in the application of an initial transform to break the positive and unit sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
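The three initial transforms named in the abstract can be sketched as follows. This is a generic illustration applied to a single composition (the ilr here uses the standard balance basis of Egozcue et al., which is one common choice), not the paper's own code:

```python
import numpy as np

def alr(x):
    # Additive log-ratio: log of the first D-1 parts relative to the last.
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def clr(x):
    # Centred log-ratio: log of each part relative to the geometric mean.
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))
    return np.log(x / g)

def ilr(x):
    # Isometric log-ratio using a standard sequential balance basis.
    x = np.asarray(x, dtype=float)
    D = len(x)
    out = np.empty(D - 1)
    for i in range(1, D):
        g = np.exp(np.mean(np.log(x[:i])))   # geometric mean of first i parts
        out[i - 1] = np.sqrt(i / (i + 1)) * np.log(g / x[i])
    return out

comp = np.array([0.6, 0.3, 0.1])   # a 3-part composition summing to 1
print(alr(comp))                   # 2 unconstrained alr coordinates
print(clr(comp))                   # 3 clr coordinates summing to zero
print(ilr(comp))                   # 2 ilr coordinates (an isometry)
```

For a compositional time series, each observed vector would be transformed this way before fitting a multivariate ARIMA model to the unconstrained coordinates.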
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams, and conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column? Or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
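The stated result, that the nearest independent table is the closed product of the row-wise and column-wise geometric marginals, can be checked numerically; the 2x2 table below is an arbitrary example, and the function names are illustrative.

```python
import numpy as np

def closure(t):
    # Rescale a table of positive entries so it sums to one.
    return t / t.sum()

def nearest_independent(p):
    # Closed outer product of the row-wise and column-wise
    # geometric marginals (geometric means along each axis).
    r = np.exp(np.log(p).mean(axis=1))   # row-wise geometric marginal
    c = np.exp(np.log(p).mean(axis=0))   # column-wise geometric marginal
    return closure(np.outer(r, c))

p = closure(np.array([[0.3, 0.1],
                      [0.2, 0.4]]))      # a dependent 2x2 probability table
q = nearest_independent(p)
print(q)   # independent: each entry equals the product of its arithmetic marginals
```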
Abstract:
BACKGROUND. Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, and their dispersion and heterogeneity, complicate the integrated exploitation of such data processing capacity. RESULTS. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by the client have to be installed, and that module functionality can be extended without re-writing the software client. CONCLUSIONS. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
Abstract:
Knowledge extraction from the logs generated by a web server, applying data mining techniques.
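As a sketch of the first step of such an analysis (the abstract names no specific tooling, so everything here is an assumption), a web server access log in the common Apache format can be parsed into structured records and aggregated before any mining technique is applied; the regular expression, field names and sample lines are illustrative.

```python
import re
from collections import Counter

# Assumed Apache common/combined log line layout; field names are illustrative.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse(lines):
    # Yield one dict of named fields per well-formed log line.
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            yield m.groupdict()

sample = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '10.0.0.2 - - [01/Jan/2024:10:00:05 +0000] "GET /missing HTTP/1.1" 404 512',
]
hits = list(parse(sample))
top_paths = Counter(h["path"] for h in hits)   # request counts per resource
```

Records like these (visitor, timestamp, resource, status) are the usual input for the sessionization and pattern-mining steps of web usage mining.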
Abstract:
The international Functional Annotation Of the Mammalian Genomes 4 (FANTOM4) research collaboration set out to better understand the transcriptional network that regulates macrophage differentiation and to uncover novel components of the transcriptome, employing a series of high-throughput experiments. The primary and unique technique is cap analysis of gene expression (CAGE), sequencing mRNA 5'-ends with a second-generation sequencer to quantify promoter activities even in the absence of gene annotation. Additional genome-wide experiments complement the setup, including short RNA sequencing, microarray gene expression profiling on large-scale perturbation experiments, and ChIP-chip for epigenetic marks and transcription factors. All the experiments are performed in a differentiation time course of the THP-1 human leukemic cell line. Furthermore, we performed a large-scale mammalian two-hybrid (M2H) assay between transcription factors and monitored their expression profile across human and mouse tissues with qRT-PCR to address combinatorial effects of regulation by transcription factors. These interdependent data have been analyzed individually and in combination with each other and are published in related but distinct papers. We provide all data together with systematic annotation in an integrated view as a resource for the scientific community (http://fantom.gsc.riken.jp/4/). Additionally, we assembled a rich set of derived analysis results, including published predicted and validated regulatory interactions. Here we introduce the resource and its update after the initial release.
Abstract:
This study presents an analysis of anti-vaccine content on web pages published on the Internet, taking as a reference the work carried out in English in 2005 by Zimmerman et al. (2005). The objective is to identify, analyse and characterize anti-vaccine web content on the Internet in Spanish and Catalan.
Abstract:
BACKGROUND It is not clear to what extent educational programs aimed at promoting diabetes self-management in ethnic minority groups are effective. The aim of this work was to systematically review the effectiveness of educational programs promoting self-management among racial/ethnic minority groups with type 2 diabetes, and to identify program characteristics associated with greater success. METHODS We undertook a systematic literature review. Specific searches were designed and implemented for Medline, EMBASE, CINAHL, ISI Web of Knowledge, Scirus, Current Contents and nine additional sources (from inception to October 2012). We included experimental and quasi-experimental studies assessing the impact of educational programs targeted at racial/ethnic minority groups with type 2 diabetes. We only included interventions conducted in OECD member countries. Two reviewers independently screened citations. Structured forms were used to extract information on intervention characteristics, effectiveness and cost-effectiveness. When possible, we conducted random-effects meta-analyses using standardized mean differences to obtain aggregate estimates of effect size with 95% confidence intervals. Two reviewers independently extracted all the information and critically appraised the studies. RESULTS We identified thirty-seven studies reporting on thirty-nine educational programs. Most were conducted in the US, with African American or Latino participants. Most programs obtained some benefits over standard care in improving diabetes knowledge, self-management behaviors and clinical outcomes. A meta-analysis of 20 randomized controlled trials (3,094 patients) indicated that the programs produced a reduction in glycated hemoglobin of -0.31% (95% CI -0.48% to -0.14%). Diabetes knowledge and self-management measures were too heterogeneous to pool. Meta-regressions showed a larger reduction in glycated hemoglobin for interventions delivered individually and face to face, as well as for those involving peer educators, including cognitive reframing techniques, and using a smaller number of teaching methods. The long-term effects remain unknown and cost-effectiveness was rarely estimated. CONCLUSIONS Diabetes self-management educational programs targeted at racial/ethnic minority groups can produce a positive effect on diabetes knowledge and on self-management behavior, ultimately improving glycemic control. Future programs should take into account the key characteristics identified in this review.
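The random-effects pooling described in the methods can be sketched with a DerSimonian-Laird estimator; the effect sizes and variances below are invented for illustration and are not the review's data.

```python
import numpy as np

def random_effects_pool(effects, variances):
    # DerSimonian-Laird random-effects pooled estimate with a 95% CI.
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                # between-study variance estimate
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical glycated-hemoglobin mean differences (%) from three trials.
est, ci = random_effects_pool([-0.40, -0.20, -0.35], [0.010, 0.020, 0.015])
```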