768 results for DDM Data Distribution Management testbed benchmark design implementation instance generator
Abstract:
In this paper, the authors introduce a novel mechanism for data management in a middleware for smart home control, where a relational database and semantic ontology storage are used at the same time in a data warehouse. An annotation system has been designed to specify the storage format and location, to register new ontology concepts and, most importantly, to guarantee data consistency between the two storage methods. To ease the data persistence process, the Data Access Object (DAO) pattern is applied and optimized to strengthen the data consistency assurance. This mechanism also provides an easy way to develop applications and integrate them with BATMP. Finally, an application named "Parameter Monitoring Service" is given as an example to assess the feasibility of the system.
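A minimal sketch of what such a dual-target DAO could look like, assuming a relational table plus an ontology store behind a single save path; the type and member names (StorageHint, ParameterDao, RelationalStore.insert, OntologyStore.assert) are hypothetical and do not reflect the middleware's actual API:

```typescript
// Hypothetical sketch: one DAO persists an entity to a relational table and/or an
// ontology store, guided by per-entity storage hints; consistency is approximated
// here by only completing when every requested write succeeds.
type StorageTarget = "relational" | "ontology" | "both";

interface StorageHint {
  target: StorageTarget;      // where this entity should be persisted
  ontologyConcept?: string;   // concept to register if the entity is new to the ontology
}

interface Parameter {
  id: string;
  name: string;
  value: number;
  hint: StorageHint;
}

interface RelationalStore { insert(table: string, row: Record<string, unknown>): Promise<void>; }
interface OntologyStore { assert(concept: string, individual: Record<string, unknown>): Promise<void>; }

class ParameterDao {
  constructor(private db: RelationalStore, private onto: OntologyStore) {}

  async save(p: Parameter): Promise<void> {
    const { target, ontologyConcept } = p.hint;
    if (target === "relational" || target === "both") {
      await this.db.insert("parameters", { id: p.id, name: p.name, value: p.value });
    }
    if (target === "ontology" || target === "both") {
      await this.onto.assert(ontologyConcept ?? "Parameter", { id: p.id, value: p.value });
    }
  }
}
```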
Abstract:
Personal data about users (customers) is a key asset for enterprises and large organizations. Its correct analysis and processing can produce relevant knowledge to achieve different business goals. For example, the monetisation of this data has become a major source of revenue for many companies, such as Google, Facebook or Twitter, which obtain huge profits mainly from targeted advertising.
Abstract:
"UILU-ENG 83-1724."--Cover.
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
Includes bibliographical references (p. 17-19).
Abstract:
Mode of access: Internet.
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library with prior arrangement.
Abstract:
Concurrent engineering and design for manufacture and assembly strategies have become pervasive across a wide array of industrial settings. These strategies have generally focused on product and process design issues based on capability concerns. The strategies have historically been justified using cost-savings calculations that focus on easily quantifiable costs, such as raw material savings or manufacturing and assembly operations no longer required. It is argued herein that neither the focus of the strategies nor the means of justification is adequate. Product and process design strategies should include both capability and capacity concerns, and justification procedures should include the financial effects that the product and process changes would have on the entire company. The authors of this paper take this more holistic view of the problem and examine an innovative new design strategy using a comprehensive enterprise simulation tool. The results indicate that both the design strategy and the simulator show promise for further industrial use. © 2001 Elsevier Science B.V. All rights reserved.
Abstract:
This paper describes the work undertaken in the Scholarly Ontologies Project. The aim of the project has been to develop a computational approach to support scholarly sensemaking, through interpretation and argumentation, enabling researchers to make claims: to describe and debate their view of a document's key contributions and relationships to the literature. The project has investigated the technicalities and practicalities of capturing conceptual relations within and between conventional documents in terms of abstract ontological structures. In this way, we have developed a new kind of index to distributed digital library systems. This paper reports a case study undertaken to test the sensemaking tools developed by the Scholarly Ontologies project. The tools used were ClaiMapper, which allows the user to sketch argument maps of individual papers and their connections; ClaiMaker, a server on which such models can be stored and which provides interpretative services to assist the querying of argument maps across multiple papers; and ClaimFinder, a novice interface to the search services in ClaiMaker.
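As a rough illustration of the kind of index such argument maps provide, the sketch below stores claims and typed links between them and answers one simple interpretative query; the types and the relation vocabulary are assumptions for illustration, not ClaiMaker's actual schema:

```typescript
// Illustrative data model for an argument map: claims within documents linked by
// typed relations, with a query that crosses document boundaries.
type RelationType = "supports" | "refutes" | "uses" | "extends";

interface Claim { id: string; docId: string; text: string; }
interface Link { from: string; to: string; type: RelationType; }

class ClaimGraph {
  private claims = new Map<string, Claim>();
  private links: Link[] = [];

  addClaim(c: Claim): void { this.claims.set(c.id, c); }
  relate(from: string, to: string, type: RelationType): void { this.links.push({ from, to, type }); }

  // Which claims (possibly in other papers) challenge a given claim?
  challengers(claimId: string): Claim[] {
    return this.links
      .filter(l => l.to === claimId && l.type === "refutes")
      .map(l => this.claims.get(l.from))
      .filter((c): c is Claim => c !== undefined);
  }
}
```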
Abstract:
The aim of this paper is to present an analysis of the operational risks of supply chains, especially the risks deriving from the outsourcing of distribution management. Based on a literature review, the first part of the paper explores the potential reasons behind the increasing risk exposure of global supply chains and briefly presents the possible steps of corporate risk management in this area. Drawing on semi-structured qualitative interviews, the second part summarizes and systematizes the risks associated with the outsourcing of distribution, surveys the related risk management options, and presents the risk-prevention alternatives applied by the surveyed companies.
Abstract:
Background: Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed too time consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and discarding of potentially useful data. Results: We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations that have both low overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps API. Our five implementations introduce design elements that can benefit visualization developers. Conclusions: We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
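The pre-rendered-tile idea can be illustrated with the Maps JavaScript API's standard custom map type: the visualization is rendered offline into a 256×256 tile pyramid, and the browser only fetches the tiles needed for the current view. The tile URL pattern, zoom range, and the "map" element id below are assumptions for illustration, not the paper's actual setup:

```typescript
// Assumes the Google Maps JavaScript API has already been loaded on the page.
declare const google: any;

function showHeatmap(): void {
  const map = new google.maps.Map(document.getElementById("map") as HTMLElement, {
    center: { lat: 0, lng: 0 },
    zoom: 2,
    streetViewControl: false,
  });

  const heatmapTiles = new google.maps.ImageMapType({
    name: "heatmap",
    tileSize: new google.maps.Size(256, 256),
    minZoom: 0,
    maxZoom: 6,
    // Each zoom level is a pre-computed image pyramid, exactly like map imagery;
    // the tile server URL is a placeholder.
    getTileUrl: (coord: { x: number; y: number }, zoom: number) =>
      `https://example.org/tiles/heatmap/${zoom}/${coord.x}/${coord.y}.png`,
  });

  // Replace the base map imagery with the pre-rendered biological visualization tiles.
  map.mapTypes.set("heatmap", heatmapTiles);
  map.setMapTypeId("heatmap");
}
```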
Abstract:
The purpose of this research was to design and implement a Series of Latin Shows to be featured at the Satine Restaurant located in The Diplomat Hotel in Hollywood, Florida. Three shows were created, "Electro Tango," "Bossa Nova Jazz," and "Piel Canela Night," to help generate interest not only in the Satine Restaurant but also in the surrounding area. The artistic concept included big bands, costumes, dancers and a DJ. A production book was created that included the most important aspects of the individual shows, such as budgets, costumes, and ground plans, to assure the success of each event. A careful analysis of the area's demographics was conducted, and a marketing plan was designed and implemented. The research and the practical application of similar shows in the industry determined that the production of these particular shows, although costly, has a viable chance of succeeding in this venue.
Abstract:
Building Information Modeling (BIM) is the use of virtual building information models to develop building design solutions and design documentation and to analyse construction processes. Recent advances in IT have enabled advanced knowledge management, which in turn facilitates sustainability and improves asset management in the civil construction industry. There are, however, several important qualifiers and some disadvantages to the current suite of technologies. This paper outlines the benefits, enablers, and barriers associated with BIM and makes suggestions about how these issues may be addressed. The paper highlights the advantages of BIM, particularly the increased utility and speed, enhanced fault finding in all construction phases, and improved collaboration and visualisation of data. The paper additionally identifies a range of issues concerning the implementation of BIM: intellectual property, liability, risk, contracts, and the authenticity of users. Implementing BIM requires investment in new technology and skills training, the development of new ways of collaborating, and attention to trade-practices concerns. However, when these challenges are overcome, BIM as a new information technology promises a new level of collaborative engineering knowledge management, facilitating sustainability and asset management in design, construction, asset management practices, and eventually decommissioning in the civil engineering industry.
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been largely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to investigate the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees, or more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called node-depth-degree representation (NDDR), and we demonstrate that using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds small constants and lower order terms to the theoretical bound.
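A simplified sketch of the node-depth style of encoding on which the NDDR builds: each tree is stored in preorder as (node, depth) pairs, so a subtree is a contiguous slice of the array, and a new spanning tree is produced by moving such a slice under a new parent. The helper names are illustrative, and the paper's degree information and O(√n) average-time machinery are not reproduced here:

```typescript
// Node-depth encoding of one tree in a spanning forest (preorder with depths).
interface NodeDepth { node: number; depth: number; }

// Return [start, end) bounds of the subtree rooted at position `pos`.
function subtreeRange(tree: NodeDepth[], pos: number): [number, number] {
  const rootDepth = tree[pos].depth;
  let end = pos + 1;
  while (end < tree.length && tree[end].depth > rootDepth) end++;
  return [pos, end];
}

// Detach the subtree at `pos` and reattach it as the first child of the node at
// `newParentPos` (which must lie outside the moved subtree), returning the encoding
// of the modified spanning tree.
function moveSubtree(tree: NodeDepth[], pos: number, newParentPos: number): NodeDepth[] {
  const [start, end] = subtreeRange(tree, pos);
  const subtree = tree.slice(start, end);
  const rest = [...tree.slice(0, start), ...tree.slice(end)];

  // Recompute where the new parent sits after the removal.
  const shift = newParentPos >= end ? end - start : 0;
  const parentIdx = newParentPos - shift;

  // Re-root the moved slice so its root sits one level below the new parent.
  const depthDelta = rest[parentIdx].depth + 1 - subtree[0].depth;
  const rerooted = subtree.map(nd => ({ node: nd.node, depth: nd.depth + depthDelta }));

  return [...rest.slice(0, parentIdx + 1), ...rerooted, ...rest.slice(parentIdx + 1)];
}
```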