937 results for DATA INTEGRATION


Relevance:

30.00%

Publisher:

Abstract:

Integrating information in the molecular biosciences involves more than the cross-referencing of sequences or structures. Experimental protocols, results of computational analyses, annotations and links to relevant literature form integral parts of this information, and impart meaning to sequence or structure. In this review, we examine some existing approaches to integrating information in the molecular biosciences. We consider not only technical issues concerning the integration of heterogeneous data sources and the corresponding semantic implications, but also the integration of analytical results. Within the broad range of strategies for integration of data and information, we distinguish between platforms and developments. We discuss two current platforms and six current developments, and identify what we believe to be their strengths and limitations. We identify key unsolved problems in integrating information in the molecular biosciences, and discuss possible strategies for addressing them including semantic integration using ontologies, XML as a data model, and graphical user interfaces as integrative environments.
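The abstract's closing suggestion of XML as a data model can be illustrated with a minimal sketch: one XML record bundling a sequence with its annotations and a literature link. All element names and values below are hypothetical illustrations, not drawn from the review itself.

```python
# Minimal sketch (hypothetical schema): XML as a neutral data model that
# bundles a sequence with its annotations and literature references.
import xml.etree.ElementTree as ET

def build_record(acc, seq, annotations, pmids):
    """Assemble one integrated XML record for a sequence entry."""
    rec = ET.Element("entry", attrib={"accession": acc})
    ET.SubElement(rec, "sequence").text = seq
    ann_el = ET.SubElement(rec, "annotations")
    for feature, value in annotations.items():
        ET.SubElement(ann_el, "annotation", attrib={"feature": feature}).text = value
    lit = ET.SubElement(rec, "literature")
    for pmid in pmids:
        ET.SubElement(lit, "ref", attrib={"pmid": pmid})
    return rec

record = build_record(
    "P00001", "MKTAYIAKQR",
    {"function": "putative kinase", "source": "computational analysis"},
    ["12345678"],
)
xml_text = ET.tostring(record, encoding="unicode")
```

Because the record is plain XML, any of the heterogeneous sources the review discusses could emit or consume it without sharing a database schema.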

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, with the expansion of organizational scope and the tendency for outsourcing, there has been an increasing need for Business Process Integration (BPI), understood as the sharing of data and applications among business processes. The research efforts and development paths in BPI pursued by many academic groups and system vendors, targeting heterogeneous system integration, continue to face several conceptual and technological challenges. This article begins with a brief review of major approaches and emerging standards to address BPI. Further, we introduce a rule-driven messaging approach to BPI, which is based on the harmonization of messages in order to compose a new, often cross-organizational process. We will then introduce the design of a temporal first order language (Harmonized Messaging Calculus) that provides the formal foundation for general rules governing the business process execution. Definitions of the language terms, formulae, safety, and expressiveness are introduced and considered in detail.

Relevance:

30.00%

Publisher:

Abstract:

Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another; the application of statistical methods alone can impose arbitrary, often unrealistic, sharp boundaries on the model. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, assessment of model predictions, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation on an independent data set, and scale assessment of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including: provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps for conservation and forestry planning in poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work-list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, and its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focusing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements a workflow data model must meet in order to support data validation is also given.
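One kind of data flow problem the abstract describes (data read before anything has produced it) can be sketched as a simple check over activities in execution order. The workflow, activity names, and data items below are hypothetical illustrations, not the paper's formalism.

```python
# Minimal sketch of a pre-deployment data-flow check: flag reads of data
# items no earlier activity has written (missing data) and writes that are
# never read (redundant data). All names here are hypothetical.
def validate_data_flow(activities):
    """activities: list of (name, reads, writes) tuples in execution order."""
    written, read_somewhere = set(), set()
    missing = []
    for name, reads, writes in activities:
        for item in reads:
            if item not in written:
                missing.append((name, item))  # read with no upstream producer
            read_somewhere.add(item)
        written.update(writes)
    redundant = written - read_somewhere      # produced but never consumed
    return missing, redundant

workflow = [
    ("receive_order", set(), {"order"}),
    ("check_credit", {"order", "credit_score"}, {"approval"}),  # credit_score never produced
    ("ship", {"order", "approval"}, {"invoice"}),               # invoice never consumed
]
missing, redundant = validate_data_flow(workflow)
# missing   -> [("check_credit", "credit_score")]
# redundant -> {"invoice"}
```

Running such a check before deployment catches exactly the class of errors the abstract warns would otherwise surface only at execution time.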

Relevance:

30.00%

Publisher:

Abstract:

Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive: processing time grows with the number of clusters (or individuals) in the data, and the techniques are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is therefore a clear need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. Computational time has significant implications for which approach researchers will prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM over three waves of the HILDA survey.
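The numerical integration the abstract refers to can be sketched with non-adaptive Gauss-Hermite quadrature for a binary random-intercept logit model; the multinomial and adaptive variants the paper evaluates are more involved. Data and parameter values below are toy stand-ins, not the HILDA analysis.

```python
# Hedged sketch: approximate the marginal likelihood of one cluster in a
# random-intercept logit model by Gauss-Hermite quadrature. The random
# intercept b ~ N(0, sigma^2) is integrated out numerically.
import numpy as np

def marginal_loglik(y, x, beta, sigma, n_nodes=20):
    """Marginal log-likelihood for one cluster of binary responses y."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for w, z in zip(weights, nodes):
        b = np.sqrt(2.0) * sigma * z          # change of variables for N(0, sigma^2)
        eta = beta * x + b
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(np.where(y == 1, p, 1.0 - p))
        total += w * lik
    return np.log(total / np.sqrt(np.pi))     # normalising constant of the weight

y = np.array([1, 0, 1])                       # toy responses for one individual
x = np.array([0.5, -1.0, 2.0])                # toy covariate values
ll = marginal_loglik(y, x, beta=0.8, sigma=1.2)
```

The cost the abstract highlights is visible here: each cluster requires a full quadrature sum, and this must be repeated at every step of a likelihood maximiser, so runtime scales with the number of individuals.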

Relevance:

30.00%

Publisher:

Abstract:

Multidimensional compound optimization is a new paradigm in the drug discovery process, yielding efficiencies during early stages and reducing attrition in the later stages of drug development. The success of this strategy relies heavily on understanding this multidimensional data and extracting useful information from it. This paper demonstrates how principled visualization algorithms can be used to understand and explore a large data set created in the early stages of drug discovery. The experiments presented are performed on a real-world data set comprising biological activity data and some whole-molecule physicochemical properties. Data visualization is a popular way of presenting complex data in a simpler form. We have applied powerful principled visualization methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), to help the domain experts (screening scientists, chemists, biologists, etc.) understand the data and draw meaningful conclusions. We also benchmark these principled methods against better-known visualization approaches, namely principal component analysis (PCA), Sammon's mapping, and self-organizing maps (SOMs), to demonstrate their enhanced power to help the user visualize the large multidimensional data sets one has to deal with during the early stages of the drug discovery process. The results reported clearly show that the GTM and HGTM algorithms allow the user to cluster active compounds for different targets and understand them better than the benchmarks do. An interactive software tool supporting these visualization algorithms was provided to the domain experts. The tool lets the domain experts explore the projections obtained from the visualization algorithms, providing facilities such as parallel coordinate plots, magnification factors, directional curvatures, and integration with industry-standard software. © 2006 American Chemical Society.
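Of the benchmark methods named above, PCA is simple enough to sketch here; GTM and HGTM require substantially more machinery. The descriptor matrix below is random stand-in data, not the paper's compound set.

```python
# Minimal PCA sketch via SVD: project each "compound" (row) onto the two
# directions of greatest variance, as the paper's PCA benchmark does.
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                       # centre each descriptor
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # low-dimensional coordinates

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100, 8))           # 100 compounds, 8 properties (toy)
coords = pca_project(descriptors)                 # shape (100, 2)
```

A scatter plot of `coords` is the kind of 2-D map the benchmarks produce; the paper's point is that GTM/HGTM yield maps on which active compounds cluster more usefully than on such linear projections.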

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this study is to explore the nature of human resource management in publicly listed finance sector companies in Nepal. In particular, it explores the extent to which HR practice is integrated into organisational strategy and devolved to line management. Design/methodology/approach: A structured interview was conducted with the senior executive responsible for human resource management in 26 commercial banks and insurance companies in Nepal. Findings: The degree of integration of HR practice appears to be increasing within this sector, but this is dependent on the maturity of the organisations. The devolvement of responsibility to line managers is at best partial, and in the case of the insurance companies, it is more out of necessity due to the absence of a strong central HR function. Research limitations/implications: The survey is inevitably based on a small sample; however, this represents 90 per cent of the relevant population. The data suggest that Western HR is making inroads into more developed aspects of Nepalese business. Compared with Nepalese business as a whole, the financial sector appears relatively Westernised, although Nepal still lags India in its uptake of HR practices. Practical implications: It appears unlikely from a cultural perspective that the devolvement of responsibility will be achieved as a result of HR strategy. National cultural, political and social factors continue to be highly influential in shaping the Nepalese business environment. Originality/value: Few papers have explored HR practice in Nepal. This paper contributes to the overall assessment of HR uptake globally and highlights emic features impacting on that uptake. © Emerald Group Publishing Limited.

Relevance:

30.00%

Publisher:

Abstract:

Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in the psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex responses at both the cellular and the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. To illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and decays dramatically at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-hour window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies in which memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor/glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered within a critical time window.

Relevance:

30.00%

Publisher:

Abstract:

The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next-generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software through the selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component-based software that is flexible, extensible, and expressive. We introduce a data-driven, object-oriented programming methodology for component-based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated, in order to develop a visualization application for geospatial data. In particular we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross-platform applicability. © The Eurographics Association 2007.

Relevance:

30.00%

Publisher:

Abstract:

Relational demographers and dissimilarity researchers contend that group members who are dissimilar (vs. similar) to their peers in terms of a given diversity attribute (e.g. demographics, attitudes, values or traits) feel less attached to their work group, experience less satisfying and more conflicted relationships with their colleagues, and consequently are less effective. However, qualitative reviews suggest the empirical findings tend to be weak and inconsistent (Chattopadhyay, Tluchowska and George, 2004; Riordan, 2000; Tsui and Gutek, 1999), and that it remains unclear when, how and to what extent such differences (i.e. relational diversity) affect group members' social integration (i.e. attachment to their work group, and satisfaction and conflicted relationships with their peers) and effectiveness (Riordan, 2000). This absence of meta-analytically derived effect size estimates and the lack of an integrative theoretical framework leave practitioners with inconclusive advice as to whether the effects elicited by relational diversity are practically relevant and, if so, how they should be managed. The current research develops an integrative theoretical framework, which it tests using meta-analysis techniques and by adding two further empirical studies to the literature. The first study reports a meta-analytic integration of the results of 129 tests of the relationship of relational diversity with social integration and individual effectiveness. Using meta-analytic and structural equation modelling techniques, it shows different effects of surface- and deep-level relational diversity on social integration. Specifically, low levels of interdependence accentuated the negative effects of surface-level relational diversity on social integration, while high levels of interdependence accentuated the negative effects of deep-level relational diversity on social integration.
The second study builds on a social self-regulation framework (Abrams, 1994) and suggests that under high levels of interdependence relational diversity is not one thing but two: visibility and separation. Using ethnicity as a prominent example, it was proposed that separation has a negative effect on group members' effectiveness, leading to overall positive additive effects for those high in visibility and low in separation, and to overall negative additive effects for those low in visibility and high in separation. These propositions were sustained in a sample of 621 business students working in 135 ethnically diverse work groups in a business simulation course over a period of 24 weeks. The third study suggests that visibility has a positive effect on group members' self-monitoring, while separation has a negative effect. The study proposed that high levels of visibility and low levels of separation lead to overall positive additive effects on self-monitoring, and to overall negative additive effects for those low in visibility and high in separation. Results from four waves of data on 261 business students working in 69 ethnically diverse work groups in a business simulation course held over a period of 24 weeks support these propositions.

Relevance:

30.00%

Publisher:

Abstract:

To capture genomic profiles of histone modification, chromatin immunoprecipitation (ChIP) is combined with next-generation sequencing, a technique called ChIP-seq. However, enriched regions generated from ChIP-seq data are typically evaluated only against the limited knowledge acquired from manually examining the relevant biological literature. This paper proposes a novel framework which integrates multiple knowledge sources, such as the biological literature, Gene Ontology, and microarray data. To analyse ChIP-seq data for histone modification precisely, knowledge integration is based on a unified probabilistic model. The model is employed to re-rank the enriched regions generated by peak finding algorithms. By filtering the re-ranked enriched regions with a predefined threshold, more reliable and precise results can be generated. The combination of multiple knowledge sources with the peak finding algorithm produces a new paradigm for ChIP-seq data analysis. © (2012) Trans Tech Publications, Switzerland.
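The re-ranking step can be sketched as combining a peak caller's score with scores from additional knowledge sources; a simple log-linear combination stands in here for the paper's unified probabilistic model, and all region names, scores, and weights are illustrative assumptions.

```python
# Hedged sketch: re-rank enriched regions by a weighted sum of log-scores
# from several evidence sources, then keep those above a threshold.
import math

def rerank(regions, weights, threshold):
    """regions: list of (name, {source: score}) with scores in (0, 1]."""
    scored = []
    for name, evidence in regions:
        combined = sum(w * math.log(evidence[src]) for src, w in weights.items())
        scored.append((name, combined))
    scored.sort(key=lambda t: t[1], reverse=True)   # best evidence first
    return [(n, s) for n, s in scored if s >= threshold]

regions = [
    ("chr1:100-600", {"peak": 0.9, "literature": 0.8, "go": 0.7}),
    ("chr2:400-900", {"peak": 0.6, "literature": 0.2, "go": 0.3}),
]
weights = {"peak": 1.0, "literature": 0.5, "go": 0.5}
kept = rerank(regions, weights, threshold=-1.0)     # keeps only chr1:100-600
```

The effect mirrors the abstract's claim: a region with strong peak evidence but weak external support can fall below the threshold, while corroborated regions survive the filter.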

Relevance:

30.00%

Publisher:

Abstract:

The efficacy, quality, responsiveness, and value of the healthcare services provided are increasingly attracting the attention, and the questioning, of governments, payers, patients, and healthcare providers. Investment in integration technologies and in the integration of supply chain processes has been considered a way to remove inefficiencies in the sector. This chapter aims first to provide an in-depth analysis of the healthcare supply chain and to present its core entities, processes, and flows. The chapter then explores the concept of integration in the context of the healthcare sector, and identifies the integration drivers as well as the challenges.

Relevance:

30.00%

Publisher:

Abstract:

This article provides a unique contribution to the debates about archived qualitative data by drawing on two uses of the same data - British Migrants in Spain: the Extent and Nature of Social Integration, 2003-2005 - by Jones (2009) and Oliver and O'Reilly (2010), both of which utilise Bourdieu's concepts analytically and produce broadly similar findings. We argue that whilst the insights and experiences of those researchers directly involved in data collection are important resources for developing contextual knowledge used in data analysis, other kinds of critical distance can also facilitate credible data use. We therefore challenge the assumption that the idiosyncratic relationship between context, reflexivity and interpretation limits the future use of data. Moreover, regardless of the complex genealogy of the data itself, given the number of contingencies shaping the qualitative research process and thus the potential for partial or inaccurate interpretation, contextual familiarity need not be privileged over other aspects of qualitative praxis such as sustained theoretical insight, sociological imagination and methodological rigour. © Sociological Research Online, 1996-2012.

Relevance:

30.00%

Publisher:

Abstract:

Construction customers are persistently seeking to achieve sustainability and maximize value, as sustainability has become a major consideration in the construction industry. In particular, whole-house refurbishment is essential to achieving the sustainability agenda of an 80% CO2 reduction by 2050, as the housing sector accounts for 28% of total UK CO2 emissions. However, whole-house refurbishment is challenging due to the highly fragmented nature of construction practice, which makes the integration of diverse information throughout the project lifecycle difficult. Consequently, Building Information Modeling (BIM) is becoming increasingly difficult to ignore as a means of managing construction projects in a collaborative manner, although current uptake in the housing sector is low, at 25%. This research aims to investigate homeowners' decision-making factors for housing refurbishment projects and to provide a valuable dataset as an essential input to BIM for such projects. One hundred and twelve homeowners and 39 construction professionals involved in UK housing refurbishment were surveyed. It was revealed that homeowners value initial cost more, while construction professionals value thermal performance. The results showed that homeowners and professionals both considered roof refurbishment the first priority. This research revealed that BIM requires a proper BIM dataset and objects for housing refurbishment.

Relevance:

30.00%

Publisher:

Abstract:

Methods and software for the integration of databases (DBs) on the properties of inorganic materials and substances have been developed. The integration of the information systems is based on a combination of known approaches: EII (Enterprise Information Integration) and EAI (Enterprise Application Integration). The kernel of the integrated system is the metabase, a special database that stores data on the contents of the integrated DBs. The proposed methods have been applied to create an integrated system of DBs in the field of inorganic chemistry and materials science. An important feature of the developed integrated system is its ability to include DBs created with different DBMSs, running on essentially different computer platforms: Sun (the "Diagram" DB) and Intel (the other DBs), and under diverse operating systems: Sun Solaris (the "Diagram" DB) and Microsoft Windows Server (the other DBs).
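The metabase idea, a registry recording which integrated database holds which content so queries can be routed to the right source, can be sketched minimally as follows. Apart from the "Diagram" DB named in the abstract, the database and property names are hypothetical.

```python
# Minimal sketch of a metabase: a registry mapping property names to the
# integrated databases that can answer queries about them.
class Metabase:
    def __init__(self):
        self._sources = {}                     # property name -> set of DB names

    def register(self, db_name, properties):
        """Record that db_name holds data for the given properties."""
        for prop in properties:
            self._sources.setdefault(prop, set()).add(db_name)

    def sources_for(self, prop):
        """Return the databases that can answer a query about `prop`."""
        return sorted(self._sources.get(prop, set()))

meta = Metabase()
meta.register("Diagram", ["phase_diagram"])                  # DB named in the abstract
meta.register("BandGap", ["band_gap", "crystal_structure"])  # hypothetical DB
print(meta.sources_for("band_gap"))   # -> ['BandGap']
```

In the real system each registered DB would sit behind its own DBMS and platform; the metabase only needs the routing metadata, which is what lets such heterogeneous sources coexist in one integrated system.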