896 results for Web log analysis
Abstract:
This paper proposes a regression model based on the modified Weibull distribution, which can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, various simulations are performed for different parameter settings, sample sizes and censoring percentages, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and model checking based on the modified deviance residual are performed to select appropriate models.
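As a concrete illustration of the estimation step, the sketch below fits a modified Weibull model to right-censored data by maximum likelihood. It assumes the Lai-Xie-Murthy parameterization S(t) = exp(-a t^b e^(lam*t)), a common form of the modified Weibull; the paper's exact parameterization may differ, and the toy data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Modified Weibull (Lai-Xie-Murthy form, assumed here):
#   S(t) = exp(-a * t**b * exp(lam * t))                   survival
#   h(t) = a * t**(b-1) * (b + lam * t) * exp(lam * t)     hazard (bathtub-capable)

def neg_log_lik(params, t, delta):
    """Censored log-likelihood: delta=1 observed failure, delta=0 right-censored."""
    a, b, lam = np.exp(params)          # log-parameterization keeps a, b, lam > 0
    log_h = np.log(a) + (b - 1) * np.log(t) + np.log(b + lam * t) + lam * t
    log_S = -a * t**b * np.exp(lam * t)
    # f = h * S, so the likelihood contribution is delta*log h + log S
    return -np.sum(delta * log_h + log_S)

# toy data: Weibull failure times with roughly 30% right censoring
rng = np.random.default_rng(0)
t = rng.weibull(1.5, 200) * 2.0
delta = (rng.random(200) > 0.3).astype(float)

fit = minimize(neg_log_lik, x0=np.log([0.5, 1.5, 0.1]), args=(t, delta),
               method="Nelder-Mead")
a_hat, b_hat, lam_hat = np.exp(fit.x)
print(a_hat, b_hat, lam_hat)
```

Depending on b and lam, the hazard above is increasing, decreasing, or bathtub-shaped, which is what makes the distribution attractive for lifetime data.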
Abstract:
Obesity affects aspects of glucose homeostasis such as insulin secretion and insulin sensitivity. Hormones secreted by adipocytes, such as leptin, mediate the metabolic consequences of obesity. Incretin hormones such as glucagon-like peptide-1 (GLP-1) increase insulin secretion in response to changes in blood glucose concentration and have been proposed to regulate insulin secretion in fasting, overweight dogs. The aim of this study was to examine the hormonal mechanisms by which adiposity alters glucose homeostasis, plasma insulin concentration, and insulin sensitivity in spontaneously overweight dogs.
Abstract:
The dearth of knowledge on the load resistance mechanisms of log houses and the need to develop numerical models capable of simulating the actual behaviour of these structures have pushed efforts to research the relatively unexplored aspects of log house construction. The aim of the research presented in this paper is to build a working model of a log house that will contribute toward understanding the behaviour of these structures under seismic loading. The paper presents the results of a series of shaking table tests conducted on a log house and goes on to develop a numerical model of the tested house. The finite element model has been created in SAP2000 and validated against the experimental results. The modelling assumptions and the difficulties involved in the process are described and, finally, the effects of varying different physical and material parameters on the results yielded by the model are discussed.
Abstract:
We compare correspondence analysis to the logratio approach based on compositional data. We also compare correspondence analysis and an alternative approach using the Hellinger distance for representing categorical data in a contingency table. We propose a coefficient that globally measures the similarity between these approaches. This coefficient can be decomposed into several components, one for each principal dimension, indicating the contribution of each dimension to the difference between the two representations. These three methods of representation can produce quite similar results. One illustrative example is given.
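The paper's similarity coefficient is not reproduced here, but the following sketch computes the two row representations side by side: classical correspondence analysis, and one common SVD-based variant of Hellinger-distance analysis (an assumption on my part), with a crude per-dimension correlation standing in for the proposed decomposition.

```python
import numpy as np

def row_coords_ca(N):
    """Row principal coordinates from classical correspondence analysis."""
    P = N / N.sum()
    r, c = P.sum(1), P.sum(0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
    U, sv, _ = np.linalg.svd(S, full_matrices=False)
    return (U * sv) / np.sqrt(r)[:, None]

def row_coords_hellinger(N):
    """Row coordinates from an SVD of square-rooted row profiles
    (one common form of Hellinger-distance analysis; assumed here)."""
    P = N / N.sum()
    r = P.sum(1)
    Y = np.sqrt(P / r[:, None])        # square-rooted row profiles
    Yc = Y - r @ Y                     # center at the row-mass-weighted mean
    U, sv, _ = np.linalg.svd(np.sqrt(r)[:, None] * Yc, full_matrices=False)
    return (U * sv) / np.sqrt(r)[:, None]

N = np.array([[16., 5., 3.], [7., 23., 11.], [4., 12., 35.]])  # toy table
F, G = row_coords_ca(N), row_coords_hellinger(N)
# crude per-dimension agreement (absolute correlation, sign-invariant);
# the paper's own coefficient is more refined than this
for k in range(2):
    print(k, abs(np.corrcoef(F[:, k], G[:, k])[0, 1]))
```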
Abstract:
This article presents an approach to evaluating access to electronic journals made available on the World Wide Web through analysis of the access log file. The access log file of the journal Informação & Sociedade: Estudos is processed and presented as an example of applying an automated analysis tool to an access log file. The characteristics inherent to access log file analysis are presented and discussed.
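For readers unfamiliar with such tools, a minimal sketch of automated access-log analysis follows. It assumes the server writes the widely used Combined Log Format; the journal's actual log layout and file name are not given in the abstract.

```python
import re
from collections import Counter

# Combined Log Format, the layout most web servers write by default;
# the journal's actual log format is an assumption here.
LINE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def summarize(log_path):
    """Count successful (HTTP 200) requests per path in an access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE.match(line)
            if m and m.group("status") == "200":
                hits[m.group("path")] += 1
    return hits

# e.g. summarize("access.log").most_common(10) -> ten most requested pages
```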
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variation. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at the development of predictive models and an understanding of IRC in Switzerland, taking into account a maximum of information in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase we performed univariate analyses of IRC for different variables, namely detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC than earlier constructions, and we observed a further drop in IRC after 1970. We also found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define geological classes that are coherent in terms of IRC. We carried out a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose, and could explain up to 33% of the variance of the log-transformed IRC across Switzerland, a good performance compared with previous attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found that spatial information such as geology, altitude and coordinates has a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation we developed an approach to determine the local probability of IRC exceeding 300 Bq/m³, and we additionally developed a confidence index in order to provide an estimate of the uncertainty of the map.
All methods allow the easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts at the same time for different architectural situations as well as geological and geographical conditions. For communicating the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation. This could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Taking into account the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
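A minimal sketch of the ensemble-regression-tree step is given below. The file name, column names and encoding choices are hypothetical; the sketch only illustrates fitting a random forest to log-transformed IRC, ranking predictors by importance, and deriving a rough exceedance probability for the 300 Bq/m³ threshold (the study itself uses kernel estimation for that map).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical training table; file and column names are illustrative,
# not the study's actual dataset.
df = pd.read_csv("irc_measurements.csv")
X = pd.get_dummies(df[["x", "y", "altitude", "outdoor_temp",
                       "building_type", "foundation",
                       "year_built", "detector_type"]])
y = np.log(df["irc_bq_m3"])                 # log-transformed IRC

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("explained variance (CV R^2):",
      cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

rf.fit(X, y)
# variable importances: spatial vs building-related predictors
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1])[:8]:
    print(f"{name:20s} {imp:.3f}")

# P(IRC > 300 Bq/m3) from the spread of per-tree predictions -- a rough
# stand-in for the study's kernel-based probability map
pred = np.stack([t.predict(X.to_numpy()) for t in rf.estimators_])
p_exceed = (pred > np.log(300)).mean(axis=0)
```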
Abstract:
The purpose of this work was to use content and discourse analysis to study how companies communicate customer references on their websites. The work focused on the themes and discourses of the companies' reference descriptions, and on how the reference relationship is discursively constructed in those descriptions. Three Finnish ICT companies were selected for the study: Nokia, TietoEnator and F-Secure. The data consist of 140 reference descriptions collected from the companies' websites. The content analysis showed that the reference descriptions concentrate on describing individual product or project deliveries to reference customers in the light of those customer relationships. The analysis identified three discourses: a discourse of benefits, a discourse of commitment, and a discourse of technological expertise. The discourses reveal the rhetorical devices of the reference descriptions and construct the reference relationship and the supplier's subject position from different perspectives. The main emphasis in the reference descriptions is on the benefits brought by the supplier's solution. The discourses portray the reference relationship as a beneficial and close customer relationship that offers a channel to external capabilities and technologies. Depending on the discourse, the supplier is presented as a provider of benefits, a reliable partner, or an experienced expert. The reference customer, by contrast, is presented from only one perspective, stereotypically as an important and satisfied customer.
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, provides a tool to separate significant signals from noise.
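The following sketch illustrates the ingredients named above on a synthetic trace: an ARMA fit, the Ljung-Box lack-of-fit test on its residuals together with an RMSE, and a DFT view of the same signal. The ARMA order and lag choices are illustrative, not those of the thesis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# synthetic stand-in for one scanner trace of basis weight
rng = np.random.default_rng(1)
n = 1024
signal = np.sin(2 * np.pi * np.arange(n) / 64) + 0.3 * rng.standard_normal(n)

# ARMA(p, q) fit; (2, 1) is an illustrative order, not the thesis's choice
res = ARIMA(signal, order=(2, 0, 1)).fit()

# Ljung-Box lack-of-fit test on the residuals: large p-values suggest the
# remaining variation is indistinguishable from white noise
lb = acorr_ljungbox(res.resid, lags=[10, 20], model_df=3)
rmse = np.sqrt(np.mean(res.resid ** 2))
print(lb)
print("RMSE:", rmse)

# DFT view of the same trace: dominant wavelengths along the scan
spectrum = np.abs(np.fft.rfft(signal))
peak = np.argmax(spectrum[1:]) + 1        # skip the DC component
print("dominant period:", n / peak, "samples")
```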
Abstract:
Background: Analysing the observed differences in incidence or mortality of a particular disease between two situations (such as time points, geographical areas, gender or other social characteristics) can be useful for both scientific and administrative purposes. From an epidemiological and public health point of view, it is of great interest to assess the effect of demographic factors on these observed differences in order to elucidate the effect of the risk of developing a disease or dying from it. The method proposed by Bashir and Estève, which splits the observed variation into three components (risk, population structure and population size), is a common choice in practice. Results: A web-based application called RiskDiff has been implemented (available at http://rht.iconcologia.net/riskdiff.htm) to perform this kind of statistical analysis, providing text and graphical summaries. Code for the implemented functions in R is also provided. An application to cancer mortality data from Catalonia is used for illustration. Conclusions: Combining epidemiological and demographic factors is crucial for analysing incidence of or mortality from a disease, especially if the population pyramids show substantial differences. The tool implemented may serve to promote and disseminate the use of this method, supporting epidemiological interpretation and decision making in public health.
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of the paper, such as basis weight, caliper and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman-filtering-based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The target of the current thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields will be compared with that of the corresponding truncated Fourier series based fields.
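A small experiment of the kind the thesis describes can be sketched as follows: compress a synthetic 2D web field by keeping the largest Fourier coefficients and, alternatively, the largest wavelet coefficients, then compare reconstruction errors. The field, wavelet family and coefficient budget are arbitrary choices for illustration.

```python
import numpy as np
import pywt

# synthetic 2D "web field": smooth MD/CD trends plus a local defect streak
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
field = np.sin(2 * np.pi * xx / 32) + 0.5 * np.cos(2 * np.pi * yy / 64)
field[60:68, :] += 1.0                      # localized disturbance

def fourier_compress(f, keep):
    """Keep the `keep` largest-magnitude 2D Fourier coefficients."""
    F = np.fft.fft2(f)
    thresh = np.sort(np.abs(F).ravel())[-keep]
    return np.real(np.fft.ifft2(np.where(np.abs(F) >= thresh, F, 0)))

def wavelet_compress(f, keep, wavelet="db4"):
    """Keep the `keep` largest wavelet coefficients (hard thresholding)."""
    coeffs = pywt.wavedec2(f, wavelet)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.sort(np.abs(arr).ravel())[-keep]
    arr = np.where(np.abs(arr) >= thresh, arr, 0)
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                             output_format="wavedec2"), wavelet)
    return rec[:f.shape[0], :f.shape[1]]    # trim any reconstruction padding

keep = 300
for name, rec in [("fourier", fourier_compress(field, keep)),
                  ("wavelet", wavelet_compress(field, keep))]:
    print(name, "RMSE:", np.sqrt(np.mean((rec - field) ** 2)))
```

The wavelet basis, being local, typically reproduces the defect streak with far fewer coefficients than the global Fourier basis, which is the motivation stated in the abstract.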
Abstract:
This paper presents a prototype of an interactive web-GIS tool for risk analysis of natural hazards, in particular floods and landslides, based on open-source geospatial software and technologies. The aim of the presented tool is to assist experts (risk managers) in analysing the impacts and consequences of a given hazard event in a considered region, providing an essential input to the decision-making process when responsible authorities and decision makers select risk management strategies. The tool is based on the Boundless (OpenGeo Suite) framework and its client-side environment for prototype development, and it is one of the main modules of a web-based collaborative decision support platform for risk management. Within this platform, users can import the maps and information needed to analyse areas at risk. Based on the provided information and parameters, loss scenarios (amount of damage and number of fatalities) for a hazard event are generated on the fly and visualized interactively within the web-GIS interface of the platform. The annualized risk is calculated by combining the resulting loss scenarios for different return periods of the hazard event. The application of the developed prototype is demonstrated using a regional data set from the Fella River in northeastern Italy, one of the case study sites of the Marie Curie ITN CHANGES project.
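The combination of loss scenarios into an annualized risk can be sketched as follows, using the common convention of integrating loss over annual exceedance frequency (the reciprocal of the return period) with the trapezoidal rule; the platform's exact formula is not given in the abstract, so this convention is an assumption.

```python
# Annualized risk from loss scenarios at discrete return periods:
# integrate loss L over annual exceedance frequency f = 1/T
# (trapezoidal rule -- one common convention, assumed here).

def annualized_risk(scenarios):
    """scenarios: list of (return_period_years, loss) pairs."""
    pts = sorted((1.0 / T, loss) for T, loss in scenarios)  # by frequency
    risk = 0.0
    for (f1, l1), (f2, l2) in zip(pts, pts[1:]):
        risk += 0.5 * (l1 + l2) * (f2 - f1)
    # the contribution from events rarer than the rarest scenario is ignored
    return risk

# e.g. flood losses (EUR) at 30-, 100- and 300-year return periods
print(annualized_risk([(30, 2.0e6), (100, 8.0e6), (300, 2.0e7)]))
```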
Abstract:
The Web is an enormous source of information. The content of web pages is a visible type of information, but the Web also contains information of another, more hidden type, in the form of the connections and networks that hyperlinks create between the websites and pages they link together. The research field of webometrics aims, among other things, to create new knowledge from this hidden information embedded in hyperlinks and to build an understanding of what kinds of phenomena and relationships outside the Web may be represented in them. The goal of this research was to increase understanding of the use of hyperlinks on the Web, and in particular of municipalities' use of hyperlinks. This research investigated how the municipalities of Egentliga Finland (Southwest Finland) created and received hyperlinks, and what kinds of networks were formed by these hyperlinks. The research mapped networks of direct links between the municipalities and of co-links to and from the municipalities, and examined whether these networks could be used to study geopolitical relationships and cooperation between the municipalities of Egentliga Finland. The overarching research questions answered in this research are: 1) From a webometric perspective, how do the municipalities of Egentliga Finland use the Web? 2) Can hyperlinks (direct links and co-links) be used to map geopolitical relationships and cooperation between municipalities? 3) What are the most important motivations for creating links between, to and from the municipalities' websites? This research arrived at unusually clear results for a webometric study, both regarding the discovered geographic factors that influence hyperlinking and the classified motivations for creating the links. The results showed that the direct hyperlinks between the municipalities can be used to map geopolitical relationships and cooperation between them, because the direct links were motivated by official reasons and were clearly influenced by the distance between the municipalities and by the economic regions. Co-links into the municipalities turned out to function as a measure of geographic similarity between the municipalities, while co-links out from the municipalities showed potential for mapping the municipalities' shared interests. The research also contributed to the development of the field of webometrics. Among the most important contributions were the development of new methods for webometric research and increased knowledge of how existing methods from network analysis can be used effectively in webometric research. The results of this research and the methods developed can be used for rapid mapping of various relationships between organizations and companies using information freely available on the Web.
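As an illustration of the link-mapping step, the sketch below builds a small directed hyperlink graph with networkx and extracts direct links between municipality sites and co-inlinks (third parties linking to two municipalities at once, used above as a measure of geographic similarity). The link data and the use of these particular domains are invented for the example.

```python
import networkx as nx
from itertools import combinations

# Hypothetical hyperlink data: directed edges source page -> target site.
# Real webometric data would come from a crawler or search engine queries.
G = nx.DiGraph()
G.add_edges_from([
    ("turku.fi", "naantali.fi"), ("turku.fi", "raisio.fi"),
    ("blog.example", "turku.fi"), ("blog.example", "naantali.fi"),
    ("news.example", "turku.fi"), ("news.example", "naantali.fi"),
])
municipalities = ["turku.fi", "naantali.fi", "raisio.fi"]

# direct links between municipality sites
direct = [(u, v) for u, v in G.edges()
          if u in municipalities and v in municipalities]
print("direct links:", direct)

# co-inlinks: third parties linking to both municipalities
for a, b in combinations(municipalities, 2):
    common = (set(G.predecessors(a)) & set(G.predecessors(b))) \
             - set(municipalities)
    if common:
        print(a, b, "co-inlinked by", sorted(common))
```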
Abstract:
This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines increasingly serve as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions, the rationale being that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for most online activities today, and compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies of SEA have focused on search engine auction mechanism design; in contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising to evaluate Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising differ between multi-channel and Web-only retailers: while the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and provide managers with a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
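To make the DEA step concrete, here is a minimal input-oriented CCR efficiency computation via linear programming. The inputs and outputs (ad spend, headcount, online sales) are invented stand-ins; the study's actual variable set is richer.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA efficiency scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1..lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o:o + 1].T, X.T])
        # outputs: -sum_j lam_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])            # theta* = 1 means efficient
    return np.array(scores)

# toy data: 4 retailers, inputs = (ad spend, headcount), output = online sales
X = np.array([[100., 5.], [120., 8.], [80., 6.], [150., 4.]])
Y = np.array([[200.], [210.], [160.], [220.]])
print(ccr_efficiency(X, Y).round(3))
```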
Abstract:
Parent–school relationships contribute significantly to the quality of students' education. The Internet, in turn, has started to influence the way individuals communicate socially, and most school boards in Ontario now use the Internet to communicate with parents, which helps build parent–school relationships. This project comprised a conceptual analysis of how the Internet enhances parent–school relationships, intended to support Ontario school board administrators seeking to implement such technology. The study's literature review identified the links between Web 2.0 technology, parent–school relationships, and effective parent engagement. A conceptual framework of the features of Web 2.0 tools that promote social interaction was developed and used to analyze the websites of three Ontario school boards. The analysis revealed that the school board websites used static features such as email, newsletters, and announcements for communication and did not give parents the means to provide feedback through Web 2.0 features such as instant messaging. General recommendations were made so that school board administrators have the opportunity to implement changes in their school communities with feasible modifications. Overall, Web 2.0-based technologies such as interactive communication tools and social media hold the most promise for enhancing parent–school relationships because they can help not only overcome barriers of time and distance, but also strengthen parents' desire to be engaged in their children's education experiences.