896 results for Web log analysis
Abstract:
The electromagnetic propagation tool (EPT) provides the propagation time (Tpl) and the attenuation (A) of an electromagnetic wave propagating in a lossy medium. These EPT responses are functions of the dielectric permittivity of the medium. Several models and mixing formulas for the dielectric permittivity of reservoir rocks can be used in the interpretation of this high-frequency tool. However, mixing formulas do not account for the distribution and geometry of the pore space, and these parameters are essential for obtaining dielectric responses closer to those of a real rock. A model based on the parameters described above was selected and applied to dielectric data available in the literature. Good agreement was obtained between the theoretical curves and the experimental data, confirming that pore distribution and geometry must be taken into account in the development of a realistic model. Pore aspect-ratio distribution functions were also obtained, from which we generated several curves relating the EPT responses to various oil/gas saturations. These curves were applied to log analysis. Since the selected model fits the dielectric data available in the literature well, it becomes attractive to apply it to experimental data obtained from rocks of Brazilian hydrocarbon-producing fields, for the interpretation of EPT logs run in wells of those oil fields.
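For orientation only (standard plane-wave results, not equations from the thesis): with complex permittivity ε* = ε′ − jε″, the propagation constant of a lossy medium determines both EPT responses, as sketched below.

```latex
% Standard lossy-medium plane-wave relations (illustrative; not from the thesis).
% Complex permittivity: \epsilon^{*} = \epsilon' - j\epsilon''
\[
  \gamma = \alpha + j\beta = j\omega\sqrt{\mu\,\epsilon^{*}},
\]
\[
  \beta  = \omega\sqrt{\frac{\mu\epsilon'}{2}}
           \left[\sqrt{1+\left(\frac{\epsilon''}{\epsilon'}\right)^{2}}+1\right]^{1/2},
  \qquad
  \alpha = \omega\sqrt{\frac{\mu\epsilon'}{2}}
           \left[\sqrt{1+\left(\frac{\epsilon''}{\epsilon'}\right)^{2}}-1\right]^{1/2}.
\]
% EPT responses: propagation time per unit length, and attenuation in dB/m
% (8.686 converts nepers to decibels).
\[
  t_{pl} = \frac{\beta}{\omega},
  \qquad
  A \,[\mathrm{dB/m}] = 8.686\,\alpha .
\]
```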
Abstract:
Web-based media have gained steadily increasing importance in the everyday lives of adolescents in recent years. In contrast to this finding, media-didactic models have hardly found their way into civic education, and specifically into the didactics of political education in schools. It is also noted critically that the use of digital media clings to classical learning-theoretical concepts, even though instructional teaching-learning settings are regarded as counterproductive in connection with web-based media. By contrast, constructivist learning theory, whose reception in the didactics of political education has been extremely controversial, is highly compatible with planning and delivering lessons that focus specifically on the use of web-based media. Given this starting point, the aim of this thesis is to formulate constructivist conditions for political education at the theoretical level. To this end, the epistemological foundations of constructivism are first discussed on the basis of the central concepts of autopoiesis, social construction, and viability. In a second step, it is shown how a constructivist learning theory can be formulated on the basis of these epistemological foundations. In contrast to the prevailing models of behaviorism and cognitivism, the learning process is described from the student's perspective. Subsequently, the influence of the constructivist paradigm and of a constructivist learning theory on the didactics of political education is set out. From this analysis it follows that student orientation and controversy, as core principles of civic education, prove fully compatible with a constructivist orientation of teaching. The didactic reception of these two principles and the problems of implementing them as planning tools are critically reflected upon against the background of the goals of political education. Finally, on the basis of these learning-theoretical conditions, it is analyzed to what extent web-based media can be usefully employed in implementing constructivist political education. It is shown that, above all, the media presented in this thesis, the blog and the wiki, are suited to such an undertaking. The analysis shows that a close interlocking of constructivist learning theory and the use of web-based media in political education is possible. Blogs and wikis prove to be suitable media for implementing student-oriented and controversy-based political education and for carrying out the fundamental change of perspective demanded by constructivists.
Abstract:
The textural and compositional characteristics of the 400 m sequence of Pleistocene wackestones and packstones intersected at Ocean Drilling Program (ODP) Site 820 reflect deposition controlled by fluctuations in sea-level, and by variations in the rate of sediment supply. The development of an effective reefal barrier adjacent to Site 820, between 760 k.y. and 1.01 Ma, resulted in a marked reduction in sediment accumulation rates on the central Great Barrier Reef outermost shelf and upper slope. This marked change corresponds with the transition from sigmoidal prograding seismic geometry in the lower 254 m of the sequence, to aggradational geometry in the top 146 m. The reduction in the rate of sediment accumulation that followed development of the reefal barrier also caused a fundamental change in the way in which fluctuations in sea-level controlled sediment deposition. In the lower, progradational portion of the sequence, sea-level cyclicity is represented by superimposed coarsening-upward cycles. Although moderately calcareous throughout (mostly 35%-75% CaCO3), the depositional system acted in a similar manner to siliciclastic shelf depositional systems. Relative sea-level rises resulted in deposition of more condensed, less calcareous, fine, muddy wackestones at the base of each cycle. Sea-level highstands resulted in increased sedimentation rates and greater influx of coarse bioclastic material. Continued high rates of sedimentation of both coarse bioclastic material and mixed carbonate and terrigenous mud marked falling and low sea-levels. This lower part of the sequence therefore is dominated by coarse packstones, with only thin wackestone intervals representing transgressions. In contrast, sea-level fluctuations following formation of an effective reefal barrier produced a markedly different sedimentary record. The more slowly deposited aggradational sequence is characterized by discrete thin interbeds of relatively coarse packstone within a predominantly fine wackestone sequence. These thin packstone beds resulted from relatively low sedimentation rates during falling and low sea-levels, with much higher rates of muddy sediment accumulation during rising and high sea-levels. The transition from progradational to aggradational sequence geometry therefore corresponds to a transition from a "siliciclastic-type" to a "carbonate-type" depositional system.
Abstract:
This work analyzes Social Communication (Comunicação Social) in the context of the internet and outlines new research methodologies for the field, aimed at filtering scientific meaning from the information flows of social networks, news media, and any other device that allows storage of and access to structured and unstructured information. Reflecting on the paths along which these information flows develop, and especially on the volume produced, the project maps out the fields of meaning that this relationship configures in research theory and practice. The general objective of this work is to situate Social Communication within the changing, dynamic reality of the internet environment and to draw parallels with applications already carried out in other areas. Using the case study method, three cases were analyzed under two conceptual keys, Web Sphere Analysis and Web Science, contrasting information systems in their discursive and structural aspects. The aim is to observe what Social Communication gains from viewing its objects of study in the internet environment through these perspectives. The results show that seeking new forms of learning is a challenge for the Social Communication researcher, but the feedback of information in the collaborative environment of the internet is fertile ground for research, since data modeling gains an analytical corpus when the set of tools promoted and driven by technology makes it possible to isolate content and deepen the study of meanings and their relations.
Abstract:
This article analyzes the figure of the prosumer from the perspective of visual studies, combining speech act theory and new media. The objective is to assess whether Michel de Certeau's distinction between producers and consumers, strategies and tactics, remains operative in the graphical interfaces of Scott Lash's global information culture. To this end, it distinguishes two types of performativity of speech acts: the top-down performativity of software, and the bottom-up performativity of language games and forms of life. These types are applied to discourse analysis of the slogans that appear on the websites of "open" and collaborative-economy initiatives, since the former are devoted to the production of immaterial goods and the latter to the production of material goods. The analysis shows how the two types of performativity transform the textual analysis of literary and film studies into a methodology capable of investigating material actions, both human and non-human. The conclusions describe the emergence of new narrative conventions of power and control, foreign to fiction, that point toward a "DIY society".
Abstract:
The aim of this report is to provide biomass estimates (AFDW, g·m⁻²) of the four main components of the benthic food web in the southern part of the Bay of St-Brieuc: suspension-feeders, deposit-feeders, herbivores and carnivores. Patterns in the environmental data (i.e., sedimentary characteristics) are first analysed and then related to the spatial distribution of communities; both approaches use classical ordination techniques (PCA, correspondence analysis). A second typology, based on a review of the published literature on coastal macrozoobenthos feeding, ascribes each taxonomic unit to a trophic group. Finally, quantitative results are given per biota; suspension-feeders appear to dominate (in terms of biomass) most of the fine-sand habitats of the studied area.
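As a hedged illustration of the ordination step described above, a minimal PCA sketch follows; the variables and values are invented for the example, since the report's data are not reproduced here.

```python
# Hypothetical sketch: PCA ordination of sediment variables per sampling station,
# as commonly used to relate environmental patterns to community distribution.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = sampling stations, columns = sedimentary variables
# (e.g., median grain size, % mud, % organic matter) -- illustrative values only.
env = np.array([
    [180.0,  5.2, 1.1],
    [210.0,  3.8, 0.9],
    [ 95.0, 22.4, 2.7],
    [110.0, 18.9, 2.3],
    [250.0,  2.1, 0.6],
])

# Standardize so each variable contributes equally to the ordination.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(env))
print(scores)  # station coordinates on the first two ordination axes
```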
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) describing a collection's subject matter with mutually complementary values in different metadata fields, and 2) encoding a variety of collection properties/characteristics in the free-text Description field; types and genres of objects in a digital collection, as well as topical, geographic, and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yields a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, while subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate-body searches, and fewer individual-person, event, and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would fail to retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage. They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end-user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for the scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
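The transaction-log method above can be illustrated with a minimal sketch; the log format and URL patterns below are assumptions invented for the example, not the aggregations' actual schemas.

```python
# Hypothetical sketch: classifying page views in a transaction log as
# collection-level vs item-level interactions. The URL patterns and log format
# are assumed; the dissertation does not publish its log schema.
import re
from collections import Counter

LOG_LINE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+"')

def classify(path: str) -> str:
    if re.match(r"^/collections/\d+", path):
        return "collection"
    if re.match(r"^/items/\d+", path):
        return "item"
    return "other"

def tally(lines):
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            counts[classify(m.group("path"))] += 1
    return counts

sample = [
    '127.0.0.1 - - [01/Jan/2011:10:00:00] "GET /collections/42 HTTP/1.1" 200 512',
    '127.0.0.1 - - [01/Jan/2011:10:00:05] "GET /items/1337 HTTP/1.1" 200 2048',
]
print(tally(sample))  # Counter({'collection': 1, 'item': 1})
```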
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations' databases. Intruders are becoming smarter at obfuscating web requests to evade detection, and this, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, has made it evident that existing, mostly static-signature approaches lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes as input a labelled dataset of numerical attributes for classifying true positives and negatives. We present in this paper Numerical Encoding to Tame SQLIA (NETSQLIA), which implements a proof of concept for scalable numerical encoding of features into dataset attributes with a labelled class, obtained from deep web traffic analysis. In the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and to statistical Machine Learning (ML) algorithms, implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the suitability of the model for accurately classifying both legitimate web requests and SQLIA payloads.
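A minimal sketch of the two-class pipeline described above follows, using scikit-learn's Perceptron and LogisticRegression as stand-ins for TCAP and TCLR; the feature columns and values are illustrative assumptions, and the NETSQLIA encoding itself is not reproduced.

```python
# Illustrative sketch only: numerically encoded web-request features with a binary
# label (1 = SQLIA payload, 0 = legitimate), classified by a perceptron and by
# logistic regression as stand-ins for the paper's TCAP and TCLR models.
import numpy as np
from sklearn.linear_model import Perceptron, LogisticRegression

# Assumed feature columns: query length, count of SQL keywords,
# count of quote characters, count of comment tokens ("--", "/*").
X = np.array([
    [ 22, 0, 0, 0],   # legitimate
    [ 35, 1, 2, 0],   # legitimate
    [ 64, 3, 4, 1],   # injection attempt
    [128, 5, 6, 2],   # injection attempt
])
y = np.array([0, 0, 1, 1])

for model in (Perceptron(max_iter=1000), LogisticRegression()):
    model.fit(X, y)
    probe = np.array([[80, 4, 5, 1]])  # unseen, suspicious-looking request
    print(type(model).__name__, model.predict(probe))
```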
Abstract:
Technological development and the expansion of forms of communication in Colombia brought with them not only great benefits but also new challenges for the modern State. Today, the supply of spaces for disseminating electoral propaganda has grown, while a legal framework designed for the mass media of the twentieth century persists. This work therefore not only offers a diagnosis of the current mechanisms of administrative control over electoral propaganda on the Internet, but also proposes mechanisms to guarantee the principles of electoral activity, making it the first such proposal in Colombia. Given how little the topic has been studied, its scope is exploratory and it is based on a legal-institutional approach. Qualitative methods of data collection (archival work and interviews) and analysis (typologies, comparisons, exegesis of the legal framework) were used, along with quantitative elements such as statistical analysis.
Abstract:
Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and the quality of software functioning may invalidate any attempt at improvement. In addition, data-driven machine learning algorithms are the basis for human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are characterized as Big Data assembled in high-volume streams; they are unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods that provide maintenance solutions for anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides a deeper semantic interpretation of anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing of system logs. All the methods perform recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. In terms of AD accuracy, FBeM achieved (85.64 ± 3.69)%; eGNN reached (96.17 ± 0.78)%; eGFC obtained (92.48 ± 1.21)%; and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP in particular generates a log grammar and presents a higher level of model interpretability.
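As a generic illustration of log parsing (not the thesis's eLP method itself), the sketch below reduces raw log lines to templates by masking variable-looking tokens:

```python
# Generic log-parsing illustration (not eLP): reduce raw log lines to templates
# by masking tokens that look like variables (integers, hex values, IP addresses).
import re

VARIABLE = re.compile(r"^(\d+|0x[0-9a-fA-F]+|(\d{1,3}\.){3}\d{1,3})$")

def template(line: str) -> str:
    tokens = line.split()
    return " ".join("<*>" if VARIABLE.match(t) else t for t in tokens)

logs = [
    "Connection from 10.0.0.7 port 52144 accepted",
    "Connection from 10.0.0.9 port 52191 accepted",
    "Worker 12 restarted after 300 ms",
]
for line in logs:
    print(template(line))
# -> "Connection from <*> port <*> accepted" (twice)
# -> "Worker <*> restarted after <*> ms"
```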
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder to understand. We believe that one approach to profiling and analysing distributed systems and their associated applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
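The unify-then-query idea can be sketched minimally as below; the namespace and predicate names are invented for the example and are not Slogger's actual vocabulary.

```python
# Illustrative sketch of unifying log events as RDF triples and querying them
# (the namespace and predicates are invented; they are not Slogger's schema).
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/log#")
g = Graph()

# Two log events from different layers, unified as triples in one store.
for i, (layer, level, msg) in enumerate([
    ("middleware", "ERROR", "broker connection lost"),
    ("os",         "WARN",  "disk 87% full"),
]):
    event = URIRef(f"http://example.org/log/event/{i}")
    g.add((event, RDF.type, EX.LogEvent))
    g.add((event, EX.layer, Literal(layer)))
    g.add((event, EX.level, Literal(level)))
    g.add((event, EX.message, Literal(msg)))

# Query across all layers for anything logged at ERROR level.
q = """
PREFIX ex: <http://example.org/log#>
SELECT ?layer ?msg WHERE {
  ?e a ex:LogEvent ; ex:level "ERROR" ; ex:layer ?layer ; ex:message ?msg .
}
"""
for layer, msg in g.query(q):
    print(layer, msg)
```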
Abstract:
OBJECTIVE: To characterize PubMed usage over a typical day and compare it to previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we used semantic measurements to group queries into sessions, and studied the addition and removal of terms across consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that one large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
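A minimal sketch of such query-log measurements follows; the records are invented, and a simple per-user time-gap heuristic stands in for the paper's semantic session grouping.

```python
# Sketch of basic query-log statistics. The paper grouped queries into sessions
# using semantic measurements; here a per-user time-gap heuristic stands in.
from datetime import datetime, timedelta

# (user_id, timestamp, query) -- illustrative records, not PubMed data.
log = [
    ("u1", datetime(2007, 1, 1, 9, 0, 0), "breast cancer brca1"),
    ("u1", datetime(2007, 1, 1, 9, 2, 0), "breast cancer brca1 prognosis"),
    ("u1", datetime(2007, 1, 1, 14, 0, 0), "influenza vaccine"),
    ("u2", datetime(2007, 1, 1, 9, 1, 0), "p53 apoptosis"),
]

GAP = timedelta(minutes=30)  # assumed session timeout

terms_per_query = sum(len(q.split()) for _, _, q in log) / len(log)
print(f"terms per query: {terms_per_query:.2f}")  # 2.75

sessions = 0
last_seen = {}  # user -> timestamp of that user's previous query
for user, ts, _ in sorted(log, key=lambda r: (r[0], r[1])):
    if user not in last_seen or ts - last_seen[user] > GAP:
        sessions += 1
    last_seen[user] = ts
print("sessions:", sessions)  # 3
```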
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We have developed IIS by integrating diverse databases, meeting the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
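As a hedged sketch of the Interactome module's export format, the snippet below writes a two-node network as XGMML for Cytoscape; the node labels and attributes are illustrative, not actual IIS output.

```python
# Minimal sketch of writing a two-node interaction network as XGMML, the format
# the Interactome module exports for Cytoscape. The attribute set is kept
# deliberately small; real IIS output carries much richer annotation.
import xml.etree.ElementTree as ET

graph = ET.Element("graph", {
    "label": "demo-network",
    "xmlns": "http://www.cs.rpi.edu/XGMML",
    "directed": "0",
})
for node_id, label in [("1", "GENE_A"), ("2", "GENE_B")]:
    ET.SubElement(graph, "node", {"id": node_id, "label": label})
ET.SubElement(graph, "edge", {"source": "1", "target": "2", "label": "pp"})

ET.ElementTree(graph).write("demo.xgmml", encoding="utf-8", xml_declaration=True)
```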
Abstract:
In this paper, we compare three residuals to assess departures from the error assumptions as well as to detect outlying observations in log-Burr XII regression models with censored observations. These residuals can also be used for the log-logistic regression model, which is a special case of the log-Burr XII regression model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the modified martingale-type residual in log-Burr XII regression models with censored data.
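For orientation, the standard martingale and deviance-type residual definitions for censored data are given below; the paper's exact modification may differ.

```latex
% Standard definitions (for orientation; not reproduced from the paper).
% \delta_i = 1 for an observed failure, 0 for a censored time;
% \widehat{S} is the fitted survivor function.
\[
  r_{M_i} = \delta_i + \log \widehat{S}(t_i \mid \mathbf{x}_i),
\]
\[
  r_{D_i} = \operatorname{sign}(r_{M_i})
            \left\{ -2\left[ r_{M_i}
            + \delta_i \log\!\left(\delta_i - r_{M_i}\right) \right] \right\}^{1/2}.
\]
```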
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model.
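For orientation, the standard mixture (cure-rate) formulation with a logistic link, which matches the model described above in outline, is sketched below.

```latex
% Standard mixture cure-rate formulation (illustrative outline, not the paper's
% exact parametrization). \pi(\mathbf{x}) is the surviving (cured) fraction,
% modelled by the logistic function of covariates; S(t) is the generalized
% log-gamma survivor function for the susceptible individuals.
\[
  S_{\mathrm{pop}}(t \mid \mathbf{x})
    = \pi(\mathbf{x}) + \bigl(1 - \pi(\mathbf{x})\bigr)\, S(t),
\]
\[
  \pi(\mathbf{x})
    = \frac{\exp(\mathbf{x}^{\top}\boldsymbol{\beta})}
           {1 + \exp(\mathbf{x}^{\top}\boldsymbol{\beta})}.
\]
```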