878 results for Data-Information-Knowledge Chain
Abstract:
* The work is partially supported by Grant no. NIP917 of the Ministry of Science and Education – Republic of Bulgaria.
Abstract:
A major drawback of artificial neural networks is their black-box character, so rule extraction algorithms are becoming increasingly important for explaining what a trained network has learned. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained to the desired function. The method is based on the weights of the trained network, and it allows knowledge extraction, as well as rule extraction, from networks with continuous inputs and outputs. An example application is shown, based on extracting the average load demand of a power plant.
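As a rough illustration of weight-based rule extraction, the sketch below ranks the inputs of a small trained network by their aggregate signed influence on the output and emits monotonic rules. The network shape, threshold, and rule format are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of one weight-based rule-extraction idea for a small
# feed-forward network. This is an illustrative assumption, not the
# paper's algorithm; all weights and names below are made up.
import numpy as np

def extract_monotonic_rules(w_hidden, w_out, input_names, threshold=0.1):
    """Rank inputs by their aggregate signed influence on the output.

    w_hidden : (n_inputs, n_hidden) input-to-hidden weights
    w_out    : (n_hidden,) hidden-to-output weights
    """
    influence = w_hidden @ w_out          # net signed contribution per input
    rules = []
    for name, s in zip(input_names, influence):
        if abs(s) < threshold:            # weak connections yield no rule
            continue
        direction = "increases" if s > 0 else "decreases"
        rules.append(f"IF {name} rises THEN output {direction} "
                     f"(strength {abs(s):.2f})")
    return rules

# Example with made-up weights for a 3-input, 2-hidden-unit network:
rules = extract_monotonic_rules(
    np.array([[0.8, 0.4], [-0.6, 0.2], [0.05, -0.02]]),
    np.array([0.9, -0.3]),
    ["ambient_temp", "hour_of_day", "noise_input"],
)
print("\n".join(rules))
```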
Abstract:
The aim was to develop an archive containing detailed descriptions of church bells. As objects of cultural heritage, bells have general properties such as geometric dimensions, weight, the sound of each bell, and the pitch of its tone, as well as acoustical diagrams obtained with contemporary equipment. The audio, photo, and video archive is developed using advanced technologies for analysis, preservation, and data protection.
Abstract:
The aim of this paper is to explore the management of information in an aerospace manufacturer's supply chain by analysing supply chain disruption risks. The social network perspective is used to examine the flows of information in the supply chain, and these flows are also explored in terms of push and pull information management. The supply chain risk management (SCRM) strategy is to assess how information is managed, so that companies can gather the intelligence needed to mitigate risk before any disruption to the supply chain occurs. There is a shortage of models for analysing the supply chain risk associated with information flows, possibly due to the lack of appropriate modelling techniques in this area (Tang and Nurmaya, 2011). This paper uses an exploratory case study consisting of a multi-method qualitative approach based on fifteen interviews and four focus groups.
Abstract:
The Electronic Product Code Information Service (EPCIS) is an EPCglobal standard that aims to bridge the gap between the physical world of RFID-tagged artifacts and the information systems that enable their tracking and tracing via the Electronic Product Code (EPC). Central to the EPCIS data model are "events" that describe specific occurrences in the supply chain. EPCIS events, recorded and registered against EPC-tagged artifacts, encapsulate the "what", "when", "where" and "why" of these artifacts as they flow through the supply chain. In this paper we propose an ontological model for representing EPCIS events on the Web of data. Our model provides a scalable approach for the representation, integration and sharing of EPCIS events as linked data via RESTful interfaces, thereby facilitating interoperability, collaboration and exchange of EPC-related data across enterprises at Web scale.
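The sketch below illustrates how a single EPCIS event can capture the four dimensions as a data structure and be rendered for a RESTful linked-data interface. Field names loosely follow the EPCIS ObjectEvent vocabulary, but the class and its JSON-LD-style output are illustrative assumptions, not the proposed ontology.

```python
# Minimal sketch of the "what / when / where / why" dimensions of an
# EPCIS event. Field names loosely follow the EPCIS ObjectEvent
# vocabulary, but this is an illustration, not the standard's schema
# or the paper's ontological model.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ObjectEvent:
    epc_list: list[str]          # WHAT: the EPC-tagged artifacts observed
    event_time: datetime         # WHEN: the moment of the observation
    read_point: str              # WHERE: the location of the reader
    biz_step: str                # WHY: the business step (e.g. shipping)

    def to_linked_data(self) -> dict:
        """Render the event as a JSON-LD-style dict for a RESTful interface."""
        return {
            "@type": "ObjectEvent",
            "epcList": self.epc_list,
            "eventTime": self.event_time.isoformat(),
            "readPoint": self.read_point,
            "bizStep": self.biz_step,
        }

event = ObjectEvent(
    epc_list=["urn:epc:id:sgtin:0614141.107346.2017"],
    event_time=datetime(2011, 4, 3, 20, 33, tzinfo=timezone.utc),
    read_point="urn:epc:id:sgln:0614141.07346.1234",
    biz_step="urn:epcglobal:cbv:bizstep:shipping",
)
print(event.to_linked_data())
```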
Abstract:
In the future, competitors will have more and more opportunities to buy the same information; a company's competitiveness will therefore depend not primarily on how much information it possesses, but on how well it can "translate" that information into its own language. This study aims to examine the factors that have the most significant impact on the degree to which market studies are utilised by companies. Most of the work in this area has studied the use of information in strategic decisions defined a priori, leaving the actual utilisation of market research largely unexamined. This paper, while reflecting on the findings of research on organisational theories of information processing, aims to bridge this gap. It proposes and tests a new conceptual framework that examines the use of managerial market research information in decision-making and knowledge creation within a single model. Survey data covering all the top-income business enterprises in Hungary indicate that market research findings are efficiently incorporated into the marketing information system only if the marketing manager trusts the researcher and believes that the market study is of high quality. Decision-makers are more likely to learn from market studies that facilitate the resolution of a specific problem than from descriptive studies of a more general nature.
Abstract:
The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be better achieved than is possible from a single source.

Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools that elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources, representing the information effectively, developing analytical tools, and interpreting the results in the context of the domain.

The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples.

The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources.

The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data in the form of constraints to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm.

The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database PlasmoTFBM, focusing on gene regulation of Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools.

The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
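As a toy illustration of constraint-based clustering in the spirit of the GCC algorithm described above, the sketch below adapts K-means to respect cannot-link constraints during assignment. The assignment heuristic, constraint set, and data are illustrative assumptions, not the dissertation's method.

```python
# Minimal sketch of K-means with pairwise cannot-link constraints, in the
# spirit of constraint-based clustering such as GCC. This is a toy, not
# the dissertation's actual algorithm.
import numpy as np

def constrained_kmeans(X, k, cannot_link, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            # Try clusters from nearest to farthest, skipping any that
            # would violate a cannot-link constraint with a placed point.
            for c in np.argsort(np.linalg.norm(centers - x, axis=1)):
                if all(labels[j] != c for a, b in cannot_link
                       for j in ((b,) if a == i else (a,) if b == i else ())):
                    labels[i] = c
                    break
        for c in range(k):  # recompute centers from the current assignment
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, _ = constrained_kmeans(X, 2, cannot_link=[(0, 1)])
print(labels)  # points 0 and 1 end up in different clusters
```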
Abstract:
This dissertation offers a critical international political economy (IPE) analysis of the ways in which consumer information has been governed throughout the formal history of consumer finance (1840 to the present). Drawing primarily on the United States, this project problematizes the notion of consumer financial big data as a 'new era' by tracing its roots historically from the late nineteenth century through to the present. Using a qualitative case study approach, this project applies a unique theoretical framework to three instances of governance in consumer credit big data. Throughout, the historically specific means used to govern consumer credit data are shown to be rooted in dominant ideas, institutions and material factors.
Abstract:
Part 14: Interoperability and Integration
Abstract:
I study how a larger party within a supply chain can use its superior knowledge about a financially constrained partner to help that partner gain access to cheap finance. In particular, I consider two scenarios: (i) retailer intermediation in supplier finance and (ii) the effectiveness of supplier buy-back finance. In the first chapter, I study how a large buyer can help small suppliers obtain financing for their operations. Especially in developing economies, traditional financing methods can be very costly or unavailable to such suppliers. In order to reduce channel costs, large buyers have in recent years started to implement their own financing schemes that intermediate between suppliers and financing institutions. In this paper, I analyze the role and efficiency of buyer intermediation in supplier financing. Building a game-theoretical model, I show that buyer-intermediated financing can significantly improve supply chain performance. Using data from a large Chinese online retailer and through structural regression estimation based on the theoretical analysis, I demonstrate that buyer intermediation induces lower interest rates and wholesale prices, increases order quantities, and boosts supplier borrowing. The analysis also shows that the retailer systematically overestimates consumer demand. Based on counterfactual analysis, I predict that the implementation of buyer-intermediated financing for the online retailer in 2013 improved channel profits by 18.3%, yielding more than $68M in projected savings. In the second chapter, I study a novel buy-back financing scheme employed by large manufacturers in some emerging markets. A large manufacturer can secure financing for its budget-constrained downstream partners by assuming part of the risk for their inventory, committing to buy back some unsold units. A buy-back commitment can help a small downstream party secure a bank loan and further induce a higher order quantity through better allocation of risk in the supply chain. However, such a commitment may undermine supply chain performance, as it imposes extra costs on the supplier incurred by the return of large or costly-to-handle items. I first theoretically analyze the buy-back financing contract employed by a leading Chinese automotive manufacturer and some variants of this contracting scheme. To measure the effectiveness of buy-back financing contracts, I utilize contract and sales data from the company and structurally estimate the theoretical model. Through counterfactual analysis, I study the efficiency of various buy-back financing schemes and compare them to traditional financing methods. I find that buy-back contract agreements can improve channel efficiency significantly compared to simple contracts with no buy-back, whether or not the downstream retailer can secure financing on its own.
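For intuition about why a buy-back commitment induces a higher order quantity, the sketch below works the standard newsvendor critical-fractile formula with a buy-back (salvage) price. This is a textbook illustration with made-up numbers, not the chapter's structural game-theoretic model.

```python
# Textbook newsvendor illustration of how a buy-back commitment raises
# the retailer's optimal order quantity. All numbers are made up; this
# is not the dissertation's model.
from scipy.stats import norm

p, w = 10.0, 6.0           # retail price, wholesale price per unit
demand = norm(100, 20)     # normally distributed consumer demand

for b in (0.0, 3.0):       # buy-back price per unsold unit
    critical_fractile = (p - w) / (p - b)   # P(D <= q*) at the optimum
    q = demand.ppf(critical_fractile)
    print(f"buy-back {b:>3}: order {q:.0f} units "
          f"(service level {critical_fractile:.2f})")
# A higher buy-back price shifts unsold-inventory risk to the
# manufacturer, so the retailer optimally orders more.
```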
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, along two themes. First, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case study concerns the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artifact, both geometric and radiometric aspects were crucial. The final product formed the basis of a 3D information system, a shared tool through which the different professionals involved in the restoration activities contributed in a multidisciplinary approach. Second, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process that involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used to develop an HBIM model in which the relevant information about the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the documental, analytical and reconstructive aspects.
Abstract:
Congenital muscular dystrophy with laminin α2 chain deficiency (MDC1A) is one of the most severe forms of muscular disease and is characterized by severe muscle weakness and delayed motor milestones. The genetic basis of MDC1A is well known, yet the secondary mechanisms ultimately leading to muscle degeneration and subsequent connective tissue infiltration are not fully understood. In order to obtain new insights into the molecular mechanisms underlying MDC1A, we performed a comparative proteomic analysis of affected muscles (diaphragm and gastrocnemius) from laminin α2 chain-deficient dy(3K)/dy(3K) mice, using multidimensional protein identification technology combined with tandem mass tags. Out of the approximately 700 identified proteins, 113 and 101 proteins, respectively, were differentially expressed in the diseased gastrocnemius and diaphragm muscles compared with normal muscles. A large portion of these proteins are involved in different metabolic processes, bind calcium, or are expressed in the extracellular matrix. Our findings suggest that metabolic alterations and calcium dysregulation could be novel mechanisms that underlie MDC1A and might be targets that should be explored for therapy. Also, detailed knowledge of the composition of fibrotic tissue, rich in extracellular matrix proteins, in laminin α2 chain-deficient muscle might help in the design of future anti-fibrotic treatments. All MS data have been deposited in the ProteomeXchange with identifier PXD000978 (http://proteomecentral.proteomexchange.org/dataset/PXD000978).
Abstract:
The objective was to assess the completeness and reliability of data from the Information System on Live Births (Sinasc). A cross-sectional analysis of the reliability and completeness of Sinasc data was performed using a sample of Live Birth Certificates (LBC) from 2009, relating to births in Campinas, Southeast Brazil. For data analysis, hospitals were grouped according to category of service (Unified National Health System, private, or both), 600 LBCs were randomly selected, and the data were collected onto LBC copies from the mothers' and newborns' hospital records and through telephone interviews. The completeness of the LBCs was evaluated by calculating the percentage of blank fields, and the agreement between the original LBCs and the copies was evaluated using Kappa and intraclass correlation coefficients. The completeness of the LBCs ranged from 99.8% to 100%. For most items, agreement was excellent. However, agreement was acceptable for marital status, maternal education, and the newborn's race/color, low for the number of prenatal visits and the presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the studied variables. Investment in professional training is suggested to improve the system's capacity to support the planning and implementation of health actions for the benefit of the maternal and child population.
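As an illustration of the agreement statistics named above, the sketch below computes Cohen's kappa for a categorical LBC field and a simple consistency check for a numeric one, using made-up values rather than the study's data.

```python
# Illustrative computation of the agreement statistics used in the study,
# with made-up data: Cohen's kappa for a categorical LBC field and a
# correlation check for a numeric one. Not the study's actual data.
from sklearn.metrics import cohen_kappa_score
import numpy as np

# Categorical field (e.g. marital status) coded on original vs. copied LBCs
original = ["single", "married", "married", "single", "other", "married"]
copied   = ["single", "married", "single",  "single", "other", "married"]
print(f"kappa = {cohen_kappa_score(original, copied):.2f}")

# Numeric field (e.g. birth weight in grams): Pearson r as a quick proxy
# for consistency; a full ICC would use a variance-components model.
w_orig = np.array([3200, 2850, 4100, 3000, 2500])
w_copy = np.array([3200, 2800, 4100, 3050, 2500])
print(f"r = {np.corrcoef(w_orig, w_copy)[0, 1]:.3f}")
```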
Abstract:
Despite a strong increase in research on the ecology and biogeography of seamounts and oceanic islands, many basic aspects of their biodiversity are still unknown. In the southwestern Atlantic, the Vitória-Trindade Seamount Chain (VTC) extends ca. 1,200 km offshore from the Brazilian continental shelf, from the Vitória Seamount to the oceanic islands of Trindade and Martin Vaz. For a long time, most of the available biological information concerned its islands. Our study presents and analyzes an extensive database on the fish biodiversity of the VTC, built on data compiled from the literature and from recent scientific expeditions that assessed both shallow and mesophotic environments. A total of 273 species were recorded, 211 of which occur on seamounts and 173 at the islands. New records for seamounts or islands include 191 reef fish species and 64 depth-range extensions. The structure of fish assemblages was similar between islands and seamounts, not differing in species' geographic distributions, trophic composition, or spawning strategies. The main differences were related to endemism, higher at the islands, and to the number of endangered species, higher at the seamounts. Since unregulated fishing activities are common in the region, and mining activities are expected to increase drastically in the near future (carbonates on seamount summits and metals on slopes), this unique biodiversity needs urgent attention and management.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física