937 results for DATA INTEGRATION


Relevance:

30.00%

Publisher:

Abstract:

Implementation of the GEOSS/GMES initiative requires the creation and integration of service providers, most of which deliver geospatial data from a Grid system to the interactive user. In this paper, the approaches to integrating DOS centers (service providers) used in the Ukrainian segment of GEOSS/GMES are considered, and template solutions for geospatial data visualization subsystems are suggested. The developed patterns are implemented in the DOS center of the Space Research Institute of the National Academy of Sciences of Ukraine and the National Space Agency of Ukraine (NASU-NSAU).


Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". Three different eigenvector-based feature extraction approaches are discussed, and three different kinds of applications with respect to classification tasks are considered. A summary of the results obtained concerning the accuracy of the classification schemes is presented, with a conclusion about the search for the most appropriate feature extraction method. The problem of how to discover the knowledge needed to integrate the feature extraction and classification processes is stated, and a decision support system to aid in this integration is proposed. The goals and requirements set for the decision support system and its basic structure are defined, and the means of knowledge acquisition needed to build the proposed system are considered.
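Principal component analysis is the best-known eigenvector-based feature extraction method of the kind the abstract refers to (the paper's specific three approaches are not named here). A minimal sketch on illustrative random data, not the paper's datasets:

```python
import numpy as np

def pca_features(X, k):
    """Project X (n samples x d features) onto its top-k principal
    components, i.e. the leading eigenvectors of the covariance matrix."""
    Xc = X - X.mean(axis=0)                    # center each feature
    cov = np.cov(Xc, rowvar=False)             # d x d covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top                            # reduced n x k representation

# toy data: 100 samples with 5 features, reduced to 2
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca_features(X, 2)
```

The reduced features `Z` can then be fed to any classifier; the point of the extraction step is to sidestep the curse of dimensionality by keeping only the directions of highest variance.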


The use of modern object-oriented methods for designing information systems (IS), and for describing the interrelations between an IS and the enterprise business processes it automates, makes it necessary to construct a single, complete IS from a set of local models of that system. This approach gives rise to contradictions caused by inconsistencies between the actions of individual IS developers and, more importantly, between the viewpoints of individual IS users. Similar contradictions also arise while an IS is in service at an enterprise, because individual business processes change constantly. It should also be noted that the overwhelming majority of IS are currently developed and maintained as sets of separate functional modules, each of which can function as an independent IS. However, integrating separate functional modules into a unified system can lead to many problems: for example, modules may contain functions the enterprise does not use for their intended purpose, and the information-level and program-level integration of modules from different manufacturers is complex. In most cases these contradictions, and the reasons causing them, are a consequence of initially representing the IS as an equilibrium, steady-state system. In [1], the IS was instead represented as a dynamic multistable system capable of carrying out the following actions:


The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. To meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management, and data exchange formats. A case study is used to demonstrate these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.


Parkinson's disease (PD) is a complex heterogeneous disorder with an urgent need for disease-modifying therapies. Progress towards successful therapeutic approaches for PD will require an unprecedented level of collaboration. At a workshop hosted by Parkinson's UK and co-organized by the Critical Path Institute's (C-Path) Coalition Against Major Diseases (CAMD) Consortium, investigators from industry, academia, government and regulatory agencies agreed on the need for data sharing to enable future success. Government agencies included the EMA, FDA, NINDS/NIH and IMI (Innovative Medicines Initiative). Emerging discoveries of new biomarkers and genetic endophenotypes are contributing to our understanding of the underlying pathophysiology of PD. In parallel, there is growing recognition that early intervention will be key for successful treatments aimed at disease modification. At present, there is no comprehensive understanding of disease progression and the many factors that contribute to its heterogeneity. Novel therapeutic targets and trial designs that incorporate existing and new biomarkers to evaluate drug effects, independently and in combination, are required. The integration of robust clinical data sets is viewed as a powerful approach to hasten medical discovery and therapies, as is being realized across diverse disease conditions employing big data analytics for healthcare. Applying lessons learned from these parallel efforts is critical to identify barriers and enable a viable path forward. A roadmap is presented for a regulatory, academic, industry and advocacy driven integrated initiative that aims to facilitate and streamline new drug trials and registrations in Parkinson's disease.


The emergence of innovative and revolutionary Integration Technologies (IntTech) has strongly influenced local government authorities (LGAs) in their decision-making processes. LGAs that plan to adopt such IntTech may consider this a serious investment. Advocates, however, claim that such IntTech have emerged to overcome integration problems at all levels (e.g. data, object and process). With the emergence of electronic government (e-Government), LGAs have turned to IntTech to fully automate and offer their services on-line and to integrate their IT infrastructures. While earlier research on the adoption of IntTech has considered several factors (e.g. pressure, technological, support, and financial), inadequate attention and resources have been devoted to systematically investigating the individual, decision and organisational context factors influencing top management's decisions to adopt IntTech in LGAs. It is widely acknowledged that the success of an organisation's operations relies heavily on understanding an individual's attitudes and behaviours, the surrounding context, and the type of decisions taken. Based on empirical evidence gathered through two intensive case studies, this paper investigates the factors that influence decision makers when adopting IntTech. The findings illustrate two different doctrines - one inclined and receptive towards taking risky decisions, the other disinclined. Several underlying rationales can be attributed to such mind-sets in LGAs. The authors aim to contribute to the body of knowledge by exploring the factors influencing top management's decision-making process while adopting IntTech vital for facilitating LGAs' operational reforms.


The Electronic Product Code Information Service (EPCIS) is an EPCglobal standard that aims to bridge the gap between the physical world of RFID-tagged artifacts and the information systems that enable their tracking and tracing via the Electronic Product Code (EPC). Central to the EPCIS data model are "events" that describe specific occurrences in the supply chain. EPCIS events, recorded and registered against EPC-tagged artifacts, encapsulate the "what", "when", "where" and "why" of these artifacts as they flow through the supply chain. In this paper we propose an ontological model for representing EPCIS events on the Web of data. Our model provides a scalable approach for the representation, integration and sharing of EPCIS events as linked data via RESTful interfaces, thereby facilitating interoperability, collaboration and exchange of EPC-related data across enterprises on a Web scale.
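A linked-data rendering of an EPCIS event could, for instance, serialize an ObjectEvent as JSON-LD. The sketch below is purely illustrative: the `@context` namespace, property names and event URI are hypothetical placeholders rather than the paper's actual ontology; only the EPC itself follows the standard SGTIN URN format.

```python
import json

# Hypothetical JSON-LD rendering of one EPCIS ObjectEvent capturing the
# "what", "when", "where" and "why" of an EPC-tagged artifact.
event = {
    "@context": {"epcis": "http://example.org/epcis#"},    # placeholder namespace
    "@id": "http://example.org/events/evt-001",            # placeholder event URI
    "@type": "epcis:ObjectEvent",
    "epcis:epcList": ["urn:epc:id:sgtin:0614141.107346.2017"],  # the "what"
    "epcis:eventTime": "2012-03-01T10:00:00Z",              # the "when"
    "epcis:readPoint": "urn:epc:id:sgln:0614141.00777.0",   # the "where"
    "epcis:bizStep": "shipping",                            # the "why"
}
doc = json.dumps(event, indent=2)
print(doc)
```

Serialized this way, each event is a self-describing resource that a RESTful interface can expose and other enterprises can merge with their own EPC-related data.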


The focus of this study is on governance decisions in a concurrent-channels context under uncertainty. The study examines how a firm chooses to deploy its sales force in times of uncertainty, and the subsequent performance outcomes of those deployment choices. The theoretical framework is based on multiple theories of governance, including transaction cost analysis (TCA), agency theory, and institutional economics. Three uncertainty variables are investigated. The first two, demand and competitive uncertainty, are industry-level forms of market uncertainty. The third, political uncertainty, is chosen because it is an important dimension of institutional environments, capturing non-economic circumstances such as regulations and political systemic issues. The study employs longitudinal secondary data from a Thai hotel chain, comprising monthly observations from January 2007 to December 2012. This hotel chain operates in four countries - Thailand, the Philippines, the United Arab Emirates (Dubai), and Egypt - all of which experienced substantial demand, competitive, and political uncertainty during the study period, making them ideal contexts for this study. Two econometric models, both deploying Newey-West estimations, are employed to test 13 hypotheses. The first model considers the relationship between uncertainty and governance. The second, a Newey-West variant using an Instrumental Variables (IV) estimator and a Two-Stage Least Squares (2SLS) model, tests the direct effect of uncertainty on performance and the moderating effect of governance on the relationship between uncertainty and performance. The observed relationship between uncertainty and governance follows a core prediction of TCA: that vertical integration is the preferred choice of governance when uncertainty rises. As for the subsequent performance outcomes, the results corroborate that uncertainty has a negative effect on performance. Importantly, the findings show that becoming more vertically integrated cannot moderate the effect of demand and competitive uncertainty, but can significantly moderate the effect of political uncertainty. These findings have significant theoretical and practical implications, extending our knowledge of the impact of uncertainty and bringing an institutional perspective to TCA. Further, they offer managers novel insight into the nature of different types of uncertainty, their impact on performance, and how channel decisions can mitigate these impacts.
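Newey-West estimation corrects OLS standard errors for the autocorrelation and heteroskedasticity typical of monthly series like these. A minimal sketch of the estimator on made-up data (the variables and series are illustrative, not the study's):

```python
import numpy as np

def newey_west(X, y, lags):
    """OLS coefficients with Newey-West (HAC) standard errors,
    using a Bartlett kernel over the chosen number of lags."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                       # residuals
    Xu = X * u[:, None]                    # per-observation score contributions
    S = Xu.T @ Xu                          # lag-0 (White) term
    for l in range(1, lags + 1):
        w = 1 - l / (lags + 1)             # Bartlett kernel weight
        G = Xu[l:].T @ Xu[:-l]             # lag-l autocovariance of scores
        S += w * (G + G.T)
    cov = XtX_inv @ S @ XtX_inv            # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

# toy monthly series: performance regressed on an uncertainty proxy
rng = np.random.default_rng(1)
n = 72                                     # six years of monthly data
x = rng.normal(size=n)
y = 2.0 - 0.5 * x + rng.normal(size=n)     # true intercept 2, slope -0.5
X = np.column_stack([np.ones(n), x])
beta, se = newey_west(X, y, lags=3)
```

With monthly data a small lag length (here 3) is a common starting point; the choice of lags is a judgment call, and the IV/2SLS stage of the study's second model is a separate step not shown here.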


The central proposition of this thesis is that there are key benefits to examining leadership perceptions as an attitude towards the leader. In particular, it is argued that doing so can provide an enhanced understanding of leadership perceptions and therefore advance theory in this area. To provide empirical support for this theoretical integration, the current research focused on one of the most popular leadership theories, leader-member exchange (LMX), and demonstrated how the concept of attitude strength could advance understanding of how and when LMX influences employee job performance. Although the measurement of LMX requires employees to provide a cognitive evaluation of their relationship with their leader, previous research has not, to date, considered this evaluation to be an attitude. This thesis provides a justification for doing so and develops two novel constructs: LMX importance and LMX ambivalence. Both variables are argued to represent previously unconsidered facets of the LMX relationship which, according to attitude theory, provide a more multifaceted understanding of leadership perceptions than previously envisaged, and in turn a more detailed understanding of how such perceptions influence outcomes. Two studies provided an empirical test of the above reasoning. Study 1, a longitudinal field study, demonstrated initial support for many of the hypotheses. LMX ambivalence was shown to lead to poorer task performance and organisational citizenship behaviour, mediated by the experience of negative affect. Evidence was also found for the moderating effect of LMX importance, although felt obligations were not found to mediate this moderated effect. While Study 1 used project groups as its participants, Study 2 provided a first test of the constructs in an organisational setting, with three companies providing data. Again, strong support was found for the negative effects of LMX ambivalence on employee outcomes, with evidence also found for the role of perceived organisational support in mitigating these negative effects. Support was also found for the moderated mediation hypothesis related to LMX importance, although only in the largest organisational sample. The main theoretical and methodological implications of viewing leadership perceptions as attitudes for the wider leadership area are discussed. The cross-fertilisation of research from the attitudes literature to the understanding of leadership perceptions provides new insights into leadership processes and potential avenues for further research.


Using plant-level data from a global survey with multiple time frames, the first begun in the late 1990s, this paper introduces measures of supply chain integration (SCI) and discusses the dynamic relationship between the level of integration and a set of internal and external performance measurements. Specifically, data from Hungary, the Netherlands and the People's Republic of China are used in the analyses. The time frames considered range from the late 1990s to 2009, encompassing major changes and transitions. Our results indicate that SCI has an underlying structure of four sets of indicators: (1) delivery frequency from the supplier or to the customer; (2) sharing internal processes with suppliers; (3) sharing internal processes with buyers; and (4) joint facility location with partners. The differences between groups in terms of several performance measures proved to be small and mostly statistically insignificant, but the ANOVA results suggest that, in this sample of companies, those having a joint location with their partners outperform the others.
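The group comparison behind the paper's ANOVA table boils down to a one-way F-test. A minimal sketch with made-up performance scores, not the survey data:

```python
# One-way ANOVA F-statistic: between-group variance over within-group
# variance, comparing plants with and without a joint partner location.
def f_oneway(*groups):
    all_obs = [x for g in groups for x in g]
    n, k = len(all_obs), len(groups)
    grand = sum(all_obs) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

joint_location = [4.1, 4.3, 3.9, 4.5, 4.2]   # illustrative performance scores
no_joint       = [3.6, 3.8, 3.5, 3.9, 3.7]
F = f_oneway(joint_location, no_joint)
print(round(F, 2))  # 16.67
```

A large F on (1, 8) degrees of freedom would mark the joint-location group's advantage as significant; in the paper's actual sample, most between-group differences were too small to reach significance.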


Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry for providing ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to the ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of this thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which form the basis of the semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.


The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be better achieved than is possible from a single source. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources; representing the information effectively; developing analytical tools; and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data, in the form of constraints, to perform exploratory analysis. Novel approaches for extracting constraints are proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm. The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database, called PlasmoTFBM, focuses on gene regulation of Plasmodium falciparum; it contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
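The key idea behind GCC, folding constraints into a K-means-style loop, can be sketched generically. The snippet below enforces only simple must-link constraints on toy 2-D points; the dissertation's actual algorithm additionally derives constraints from incomplete observations:

```python
import random

def constrained_kmeans(points, k, must_link, iters=20, seed=0):
    """K-means that forces must-link pairs into the same cluster: a
    generic sketch of constraint-based clustering in the spirit of GCC."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)            # initialize from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean)
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # enforce must-link constraints: point j joins point i's cluster
        for i, j in must_link:
            labels[j] = labels[i]
        # recompute centers as cluster means
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9), (0.2, 5.0)]
labels = constrained_kmeans(pts, 2, must_link=[(0, 4)])  # tie point 4 to point 0
```

Here the ambiguous point `(0.2, 5.0)` is pulled into the first cluster by its must-link constraint, illustrating how side information can override pure distance-based assignment.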


Natural and man-made disasters have gained attention at all levels of policy-making in recent years. Emergency management tasks are inherently complex and unpredictable, and often require coordination among multiple organizations across different levels and locations. Effectively managing the various knowledge areas and organizations involved has become a critical emergency management success factor. However, there is a general lack of understanding about how to describe and assess the complex nature of emergency management tasks and how knowledge integration can help managers improve emergency management task performance. The purpose of this exploratory research was, first, to understand how emergency management operations are impacted by tasks that are complex and inter-organizational and, second, to investigate how knowledge integration, as a particular knowledge management strategy, can improve the efficiency and effectiveness of emergency tasks. Three types of specific knowledge were considered: context-specific, technology-specific, and context-and-technology-specific. The research setting was the Miami-Dade Emergency Operations Center (EOC), and the study was based on survey responses from participants in past EOC activations concerning their emergency tasks and knowledge areas. The data included task attributes related to complexity, knowledge area, knowledge integration, specificity of knowledge, and task performance. The data were analyzed using multiple linear regressions and path analyses to examine (1) the relationships between task complexity, knowledge integration, and performance; (2) the moderating effects of each type of specific knowledge on the relationship between task complexity and performance; and (3) the mediating role of knowledge integration. As per the theory-based propositions, the results indicated that overall component complexity and interactive complexity tend to have a negative effect on task performance. Surprisingly, however, procedural rigidity tended to have a positive effect on performance in emergency management tasks. Also as expected, knowledge integration had a positive relationship with task performance. Interestingly, the moderating effects of each type of specific knowledge on the relationship between task complexity and performance varied, and the extent to which knowledge integration mediated the relationship depended on the dimension of task complexity.
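The path-analysis logic, task complexity influencing performance partly through knowledge integration, can be sketched with simple OLS slopes. For brevity this uses single-predictor slopes on made-up data; a full path analysis would also partial out the direct complexity-to-performance path:

```python
# Minimal mediation sketch: complexity -> integration (path a),
# integration -> performance (path b), indirect effect = a * b.
# The data are illustrative, not the EOC survey responses.
def slope(x, y):
    """OLS slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

complexity  = [1, 2, 3, 4, 5, 6]
integration = [5.8, 5.1, 4.4, 3.7, 3.0, 2.3]   # drops as complexity rises
performance = [6.9, 6.1, 5.3, 4.5, 3.7, 2.9]   # tracks integration

a = slope(complexity, integration)    # path a: complexity -> integration
b = slope(integration, performance)   # path b: integration -> performance
indirect = a * b                      # mediated effect of complexity
```

In this toy series the indirect effect is negative: higher complexity depresses knowledge integration, which in turn lowers performance, mirroring the study's finding that the extent of mediation depends on the complexity dimension.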


The No Child Left Behind Act of 2001 (NCLB) brought many significant changes to American schools, including access to technology. Through an extensive literature review of the relationship between technology leadership and student achievement, five major themes emerged from the data that support the need for more effective computer-based education in schools.


This dissertation examines the relationship between the degree of openness to international trade and the level of growth experienced by a country. More precisely, it explores how trade liberalization and economic integration can lead to specialization in production; affect national levels of welfare, productivity, and competition; and at the same time reinforce deflationary efforts. A large part of this investigation is carried out using industry-level data from Spain.