917 results for Web Log Data


Relevance:

80.00%

Publisher:

Abstract:

The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver for their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
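
As an illustration of the validation idea, here is a minimal sketch, assuming the events are already available as RDF: it runs a SPARQL ASK completeness constraint over an EPCIS event graph with rdflib. The eem: namespace URI, property names, and input file are illustrative assumptions, not the paper's exact vocabulary, and SPIN rule execution and the Storm topology are omitted.

```python
# A minimal sketch of SPARQL-based constraint validation over an EPCIS event
# graph. The eem: namespace, property names, and input file are assumptions.
from rdflib import Graph

g = Graph()
g.parse("pedigree_events.ttl", format="turtle")  # hypothetical linked pedigree

# Completeness constraint: every event must carry a timestamp.
ASK_MISSING_TIMESTAMP = """
PREFIX eem: <http://purl.org/eem#>
ASK {
    ?event a eem:EPCISEvent .
    FILTER NOT EXISTS { ?event eem:eventOccurredAt ?t }
}
"""

res = g.query(ASK_MISSING_TIMESTAMP)
if res.askAnswer:
    print("Validation failed: at least one event lacks a timestamp.")
else:
    print("All events carry timestamps; the dataset passes this constraint.")
```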

Relevance:

80.00%

Publisher:

Abstract:

Supply chains comprise complex processes spanning multiple trading partners. The various operations involved generate a large number of events that need to be integrated in order to enable internal and external traceability. Furthermore, provenance of the artifacts and agents involved in supply chain operations is now a key traceability requirement. In this paper, we propose a Semantic Web/Linked Data-powered framework for the event-based representation and analysis of supply chain activities governed by the EPCIS specification. We specifically show how a new EPCIS event type called "Transformation Event" can be semantically annotated using EEM, the EPCIS Event Model, to generate linked data that can be exploited for internal event-based traceability in supply chains involving the transformation of products. For integrating provenance with traceability, we propose a mapping from EEM to PROV-O. We exemplify our approach on an abstraction of the production processes that are part of the wine supply chain.
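
To make the EEM-to-PROV-O mapping concrete, the following is a minimal sketch with rdflib: a transformation event is described once in (assumed) EEM terms and once in standard PROV-O terms, as an Activity that uses inputs and generates outputs. The EEM namespace, its property names, and all resource URIs are illustrative assumptions; only the PROV-O terms are standard.

```python
# A minimal sketch of mapping an EEM TransformationEvent to PROV-O with rdflib.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EEM = Namespace("http://purl.org/eem#")         # assumed EEM namespace
PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C PROV-O

g = Graph()
event = URIRef("http://example.org/events/transform-42")  # hypothetical event
grapes = URIRef("http://example.org/epc/grapes-lot-7")
wine = URIRef("http://example.org/epc/wine-batch-3")

# EEM view: a transformation consumes input EPCs and produces output EPCs.
g.add((event, RDF.type, EEM.TransformationEvent))
g.add((event, EEM.inputEPC, grapes))
g.add((event, EEM.outputEPC, wine))

# PROV-O view of the same event: an Activity that uses and generates Entities.
g.add((event, RDF.type, PROV.Activity))
g.add((event, PROV.used, grapes))
g.add((wine, PROV.wasGeneratedBy, event))

print(g.serialize(format="turtle"))
```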

Relevance:

80.00%

Publisher:

Abstract:

With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive and error-prone process that also struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor models for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets demonstrate the effectiveness of the proposed approaches.
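
As a small illustration of the first problem listed above (not the dissertation's actual algorithms), the sketch below categorizes raw log messages into event types by masking variable fields, then summarizes them by frequency.

```python
# A minimal sketch of log categorization and summarization: variable fields
# (IPs, hex IDs, numbers) are masked so lines collapse into event templates.
import re
from collections import Counter

def event_template(line: str) -> str:
    """Replace numbers, hex IDs, and IPs with placeholders to get an event type."""
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line.strip()

logs = [
    "conn from 10.0.0.5 port 443",
    "conn from 10.0.0.9 port 8080",
    "disk error at 0x7f3a block 120",
]

summary = Counter(event_template(l) for l in logs)
for template, count in summary.most_common():
    print(count, template)
```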

Relevance:

80.00%

Publisher:

Abstract:

The results of inductively coupled argon plasma (ICAP) chemical analyses carried out on some 300 core samples from Ocean Drilling Program Sites 834, 835, 838, and 839 are presented. These sites were drilled during Leg 135 in the Lau Basin. The data are compared with total gamma (SGR) wireline logs at Sites 834 and 835. Pliocene (Piacenzian) nannofossil Zone CN12, which has been identified at Sites 834 and 835, is examined in detail using spectral analyses on core and wireline logs. The potassium and calcium concentrations from the core material were used to calculate an objective depth-to-geological time stretching function, which improved the stratigraphic correlation between sites. The integrated use of chemical analyses, wireline-log data and paleomagnetic results improved confidence in the correlations obtained. Although no significant sedimentation periodicities were obtained from the two sites, a common concentration of energy between 30 and 60 k.y. was recorded.
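
A minimal sketch of the depth-to-time idea, under simplified assumptions: tie points with known geological ages define a monotone depth-to-age function by interpolation, and a geochemical log (e.g. potassium concentration) is then resampled onto a uniform time axis before spectral analysis. All values below are synthetic, not Leg 135 data.

```python
# A minimal sketch of a depth-to-geological-time stretching function built
# from hypothetical tie points, used to resample a core log onto a time axis.
import numpy as np

tie_depth_m = np.array([0.0, 25.0, 60.0, 110.0])     # hypothetical tie points
tie_age_ka = np.array([0.0, 900.0, 2200.0, 3600.0])  # ages in thousands of years

sample_depth_m = np.linspace(0.0, 110.0, 56)         # e.g. a K-concentration log
k_concentration = np.random.default_rng(0).normal(2.0, 0.3, sample_depth_m.size)

# Map each sample depth to an age, then resample onto a uniform 40 k.y. grid.
sample_age_ka = np.interp(sample_depth_m, tie_depth_m, tie_age_ka)
uniform_age = np.arange(0.0, 3600.0, 40.0)
k_on_time_axis = np.interp(uniform_age, sample_age_ka, k_concentration)

print(k_on_time_axis[:5])
```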

Relevance:

80.00%

Publisher:

Abstract:

One of the leading motivations behind the multilingual semantic web is to make resources digitally accessible in an online, global, multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus to be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
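
A minimal sketch of the lexical/conceptual division described above: lexical entries in several languages point to a single language-independent concept, and each concept belongs to one of the three conceptual modules. The identifiers and structure are illustrative only, not FunGramKB's actual formats.

```python
# A minimal sketch of the lexical/conceptual split: multilingualism lives in
# the lexical layer, while the concept itself is shared and module-scoped.
from dataclasses import dataclass, field

@dataclass
class Concept:
    concept_id: str          # illustrative identifier, not FunGramKB's notation
    module: str              # "Ontology" | "Cognicon" | "Onomasticon"
    properties: dict = field(default_factory=dict)

@dataclass
class LexicalEntry:
    lemma: str
    language: str
    concept: Concept         # the lexical layer only points into the conceptual layer

house = Concept("+HOUSE_00", "Ontology", {"is-a": "+BUILDING_00"})
entries = [
    LexicalEntry("house", "en", house),
    LexicalEntry("casa", "es", house),
    LexicalEntry("maison", "fr", house),
]

for e in entries:
    print(f"{e.language}:{e.lemma} -> {e.concept.concept_id} ({e.concept.module})")
```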

Relevance:

80.00%

Publisher:

Abstract:

With the Internet of Things becoming more and more popular, and a prediction that there will be more than 50 billion devices connected to the Internet in 2020, the quantity of IoT platforms on the market is rapidly growing. Faced with so many platforms to choose from, the objective of this thesis is to offer some suggestions for reference by performing a quantitative comparison between two platforms: SensibleThings and Kaa. These two platforms have different architectures and so may suit different scenarios. The comparison includes measurement and evaluation under two designed scenarios as well as a general theoretical contrast. The two scenarios cover message delivery between two endpoints at different rates, and multiple endpoints pushing log data continually. The measurement results, together with the theoretical analysis, lead to the following conclusions. The SensibleThings platform is more suitable for simple, small-scale message delivery between endpoints, such as a home environment with few devices. The Kaa platform is more suitable for large-scale, complicated data collection and processing applications, such as the meteorology field, with huge numbers of sensors and large data volumes.
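
The sketch below illustrates the kind of measurement used in the first scenario: timing message delivery between two endpoints at a fixed rate. It uses a plain TCP socket rather than the SensibleThings or Kaa APIs, so it only demonstrates the method; the host, port, message count, and rate are assumed test parameters.

```python
# A minimal, platform-agnostic sketch of round-trip latency measurement
# between two endpoints at a fixed message rate.
import socket
import threading
import time

HOST, PORT, N_MESSAGES, RATE_HZ = "127.0.0.1", 9999, 50, 10  # assumed setup

def echo_receiver():
    """Stand-in endpoint: acknowledges every message it receives."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(b"ack")

threading.Thread(target=echo_receiver, daemon=True).start()
time.sleep(0.1)  # give the receiver a moment to start listening

latencies = []
with socket.create_connection((HOST, PORT)) as s:
    for i in range(N_MESSAGES):
        start = time.perf_counter()
        s.sendall(b"log-entry %d\n" % i)
        s.recv(16)  # wait for the acknowledgement
        latencies.append(time.perf_counter() - start)
        time.sleep(1.0 / RATE_HZ)

print("mean round-trip: %.3f ms" % (1000 * sum(latencies) / len(latencies)))
```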

Relevance:

80.00%

Publisher:

Abstract:

Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) describing a collection's subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties/characteristics encoded in the free-text Description field; types and genres of objects in a digital collection, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, and subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches, and fewer individual person, event, and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would fail to retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
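
A minimal sketch of the transaction-log analysis step: classifying logged page requests as collection-level or item-level interactions and comparing their frequencies. The URL patterns and log lines are hypothetical and do not reflect Opening History's actual logs.

```python
# A minimal sketch of classifying transaction-log requests by interaction level.
import re
from collections import Counter

log_lines = [
    "GET /collection/123 HTTP/1.1",
    "GET /item/98765 HTTP/1.1",
    "GET /collection/123/browse?subject=maps HTTP/1.1",
]

def classify(request: str) -> str:
    if re.search(r"/collection/\d+", request):
        return "collection-level"
    if re.search(r"/item/\d+", request):
        return "item-level"
    return "other"

counts = Counter(classify(l) for l in log_lines)
print(counts)  # e.g. Counter({'collection-level': 2, 'item-level': 1})
```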

Relevance:

80.00%

Publisher:

Abstract:

As part of the Portuguese State's obligations to guarantee the safety of its citizens, an assessment of the risk to the lives of national citizens residing in or present in countries or regions with national communities is carried out; under customary international law, it is understood to be legitimate to conduct a military operation to extract non-combatant nationals from such risk areas. This work aims to contribute to a reflection on geospatial support for an operation to extract non-combatant national citizens, known as a NEO (non-combatant evacuation operation). Given the importance to military commanders of holistic knowledge of the operational environment, Geographic Information Systems play a fundamental role in the analysis, contextualization and visualization of geospatial information, serving as a valuable decision-support system. Decision-making draws on contributions from several areas of knowledge, so it is essential that planning be based on the same geospatial information, avoiding a multitude of geospatial datasets that are not always coherent, up to date and accessible to all who need them; this work intends to contribute to solving that problem. It also addresses the scarcity of geographic data in the areas where this type of operation may take place, the relevance and suitability of using open spatial data, data models, and the ways in which the information can be made available.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this monograph is to understand the role of the African Union (AU) within the AMISOM peace mission in the period 2007-2013. The work therefore covers geopolitical and historical aspects that have influenced the configuration of the armed conflict in Somalia and that have progressively led to the creation, evolution and implementation of mechanisms such as peace missions. In addition, it draws on the tenets of neo-functionalism and neo-regionalism to understand the AU's structures and dynamics, and thus to understand the nature of both its actions and its purposes, purposes that proclaim the promotion of pan-Africanism. From this standpoint, one can understand how its role has contributed to the growth of the military-industry market in the region at the expense of the responsibility to protect. Finally, it concludes that these dynamics have led to the creation of communities of insecurity.

Relevance:

80.00%

Publisher:

Abstract:

Image-to-image (i2i) translation networks can generate fake images that are beneficial for many applications in augmented reality, computer graphics, and robotics. However, they require large-scale datasets and high contextual understanding to be trained correctly. In this thesis, we propose strategies for solving these problems, improving the performance of i2i translation networks by using domain- or physics-related priors. The thesis is divided into two parts. In Part I, we exploit human abstraction capabilities to identify existing relationships in images, thus defining domains that can be leveraged to improve data usage efficiency. We use additional domain-related information to train networks on web-crawled data, hallucinate scenarios unseen during training, and perform few-shot learning. In Part II, we instead rely on physics priors. First, we combine realistic physics-based rendering with generative networks to boost the realism and controllability of outputs. Then, we exploit naive physical guidance to drive a manifold reorganization, which allowed us to generate continuous conditions such as timelapses.
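
A minimal sketch of the physics-prior idea from Part II, under assumed shapes and architecture: a generator receives a physics-rendered guidance map concatenated with its RGB input, so the physical rendering conditions the generated output. This illustrates the conditioning pattern only, not the thesis's actual networks.

```python
# A minimal sketch of conditioning a generator on a physics-rendered prior.
import torch
import torch.nn as nn

class GuidedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 physics-rendered guidance channel
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, rgb, physics_map):
        return self.net(torch.cat([rgb, physics_map], dim=1))

g = GuidedGenerator()
rgb = torch.rand(1, 3, 64, 64)
physics_map = torch.rand(1, 1, 64, 64)  # stands in for a physics-based rendering
print(g(rgb, physics_map).shape)        # torch.Size([1, 3, 64, 64])
```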

Relevance:

50.00%

Publisher:

Abstract:

This research investigates the claim that Change Data Capture (CDC) technologies capture data changes in real time. Based on theory, our hypothesis states that real-time CDC is not achievable with traditional approaches (log scanning, triggers and timestamps): traditional approaches to CDC require a resource to be polled, which prevents true real-time CDC. We propose an approach to CDC that encapsulates the data source with a set of web services. These web services propagate the changes to the targets and eliminate the need for polling. Additionally, we propose a framework for CDC technologies that allows changes to flow from source to target. This paper discusses current CDC technologies and presents the theory about why they are unable to deliver changes in real time. We then discuss our web service approach to CDC and the accompanying framework, explaining how they can produce real-time CDC. The paper concludes with a discussion of the research required to investigate the real-time capabilities of CDC technologies. © 2010 IEEE.
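
A minimal sketch of the push-based principle behind the web-service approach: the data source notifies registered targets at write time, so no polling interval limits change latency. The in-memory classes below stand in for the paper's web-service layer, using plain callables for brevity.

```python
# A minimal sketch of push-based CDC: targets subscribe once and receive each
# change at write time, instead of polling the source on an interval.
import time
from typing import Callable

class ChangeCapturingSource:
    def __init__(self):
        self.subscribers: list[Callable[[dict], None]] = []
        self.rows: dict[int, str] = {}

    def subscribe(self, target: Callable[[dict], None]) -> None:
        self.subscribers.append(target)

    def update(self, key: int, value: str) -> None:
        self.rows[key] = value
        change = {"key": key, "value": value, "ts": time.time()}
        for target in self.subscribers:  # push immediately: no polling interval
            target(change)

source = ChangeCapturingSource()
source.subscribe(lambda c: print("target received change:", c))
source.update(1, "new value")  # the target sees the change at write time
```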

Relevance:

40.00%

Publisher:

Abstract:

High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
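
A minimal sketch of the Interactome module's final step: writing a small interaction network as XGMML so it can be imported into Cytoscape. The nodes and edge are hypothetical; only the XGMML skeleton matters here.

```python
# A minimal sketch of emitting an interaction network as an XGMML file.
import xml.etree.ElementTree as ET

root = ET.Element("graph", label="example-network",
                  xmlns="http://www.cs.rpi.edu/XGMML")
for node_id, label in [("1", "YFG1"), ("2", "YFG2")]:  # hypothetical proteins
    ET.SubElement(root, "node", id=node_id, label=label)
ET.SubElement(root, "edge", source="1", target="2", label="pp")

ET.ElementTree(root).write("network.xgmml", xml_declaration=True,
                           encoding="UTF-8")
```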

Relevance:

40.00%

Publisher:

Abstract:

This paper proposes a regression model based on the modified Weibull distribution, which can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set using log-modified Weibull regression models. A diagnostic analysis and model checking based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
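
A minimal sketch, not the paper's full regression model: maximum-likelihood estimation for the modified Weibull distribution with right-censored data, assuming the common three-parameter form with cumulative hazard H(t) = a t^γ exp(λt), so that the hazard is h(t) = a (γ + λt) t^(γ-1) exp(λt). The data and starting values below are synthetic.

```python
# A minimal sketch of MLE for the modified Weibull with right censoring,
# assuming cumulative hazard H(t) = a * t**g * exp(l * t).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta):
    """delta[i] = 1 if the failure is observed, 0 if right-censored."""
    a, g, l = params
    if a <= 0 or g <= 0 or l < 0:
        return np.inf  # keep the optimizer inside the valid parameter space
    log_h = np.log(a) + np.log(g + l * t) + (g - 1) * np.log(t) + l * t
    log_S = -a * t**g * np.exp(l * t)
    # Observed failures contribute log h(t) + log S(t); censored only log S(t).
    return -(np.sum(delta * log_h) + np.sum(log_S))

rng = np.random.default_rng(1)
t = rng.weibull(1.5, 200) * 10          # synthetic lifetimes (illustrative)
delta = (t < 15).astype(float)          # right-censor observations at t = 15
t = np.minimum(t, 15)

fit = minimize(neg_loglik, x0=[0.05, 1.2, 0.01], args=(t, delta),
               method="Nelder-Mead")
print("a, gamma, lambda =", fit.x)
```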

Relevance:

40.00%

Publisher:

Abstract:

Dissertation presented to obtain the Ph.D. degree in Bioinformatics

Relevance:

40.00%

Publisher:

Abstract:

Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science