905 results for P2P and networked data management
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Dissertation, 2011
Abstract:
The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing, autonomous, distributed, or heterogeneous resources.
Abstract:
The goal of this Master's thesis was to describe the workflow of the different functions in the order-delivery process when a product data management system is part of the working environment. The theoretical part of the thesis examined business process re-engineering and process definition, and presented the key areas of product data management (PDM). The background and strategies of the target company were then introduced, after which the changes were assessed against the findings of the theoretical part. To establish the current ways of working, people from every stage of the order-delivery process within the production unit were interviewed. Finally, the company's product data management principles were described and the workflow was defined for the different stages of the process. As the new product data management system is taken into use, the company must also adopt the product data management mindset. Management of the product structure is now divided between different functions, so that the engineering structure, the production structure, and the service structure are the responsibility of different people. How these structures are configured during the order-delivery process determines the order in which tasks must be carried out across the different systems. The multinational engineering organization must also be taken into account during order processing. The product data management system is used together with the familiar design tools and the enterprise resource planning (ERP) system. The workflow chart defines a company-wide model of how, and in what order, tasks must be performed in the different systems during the order-delivery process. This thesis examined the functions of the order-delivery process that are most relevant to product definition and the management of design data: sales, sales support, production control, application engineering, and documentation. In the future, it is recommended to consider deploying the product data management system in production and purchasing as well. Further development of the order-delivery process should next be targeted at the order definition stage at the seller-customer interface, where errors made are compounded at every subsequent stage of the process.
Abstract:
There is remarkable agreement in expectations today for vastly improved ocean data management a decade from now -- capabilities that will help to bring significant benefits to ocean research and to society. Advancing data management to such a degree, however, will require cultural and policy changes that are slow to effect. The technological foundations upon which data management systems are built are certain to continue advancing rapidly in parallel. These considerations argue for adopting attitudes of pragmatism and realism when planning data management strategies. In this paper we adopt those attitudes as we outline opportunities for progress in ocean data management. We begin with a synopsis of expectations for integrated ocean data management a decade from now. We discuss factors that should be considered by those evaluating candidate “standards”. We highlight challenges and opportunities in a number of technical areas, including “Web 2.0” applications, data modeling, data discovery and metadata, real-time operational data, archival of data, biological data management, and satellite data management. We discuss the importance of investments in the development of software toolkits to accelerate progress. We conclude the paper by recommending a few specific, short-term targets for implementation that we believe to be both significant and achievable, and by calling for action by community leadership to effect these advancements.
Abstract:
In geophysics and seismology, raw data need to be processed to generate useful information that researchers can turn into knowledge. The number of sensors acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent querying and preparing datasets for analysis than acquiring raw data. Moreover, a lot of good-quality data acquired at great effort can be lost forever if they are not stored correctly. Local and international cooperation will probably be reduced, and much of the data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of Sao Paulo (IAG-USP) has concentrated its efforts on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system that facilitates local and international cooperation.
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators, and controllers occurs over a shared band-limited digital communication network. The use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges that are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network for use in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data is exploited as a good tradeoff between accuracy in measuring the input signals and the delay before actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms that we prove performs better because it takes the changes of the input signal over time into account while the approximate interpolation is being computed.
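As a rough, illustrative sketch of the general idea of time-aware approximate interpolation of sensed data (not the authors' algorithm: the inverse-distance weighting, the exponential age decay, and all names and parameters below are assumptions made for this example):

```python
# Illustrative sketch only: approximate interpolation of scattered sensor
# readings, with a weight that decays for stale samples so the estimate
# tracks changes of the input signal over time.
import math

def interpolate(query_xy, samples, now, power=2.0, tau=1.0):
    """Estimate the field value at query_xy from (x, y, t, value) samples.

    Weights combine inverse-distance weighting with an exponential decay in
    sample age; both the form and the parameters are assumptions.
    """
    num = den = 0.0
    for x, y, t, value in samples:
        dist = math.hypot(query_xy[0] - x, query_xy[1] - y)
        w_space = 1.0 / (dist ** power + 1e-9)   # nearer sensors count more
        w_time = math.exp(-(now - t) / tau)      # older readings count less
        w = w_space * w_time
        num += w * value
        den += w
    return num / den if den else float("nan")

if __name__ == "__main__":
    # (x, y, timestamp, reading)
    readings = [(0.0, 0.0, 0.0, 20.0), (1.0, 0.0, 0.9, 22.0), (0.0, 1.0, 0.5, 21.0)]
    print(interpolate((0.5, 0.5), readings, now=1.0))
```

In a networked control setting the controller would query such an estimate instead of waiting for every raw sample, trading some measurement accuracy for a shorter delay to actuation.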
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
1975
Abstract:
Quantitative information from magnetic resonance imaging (MRI) may substantiate clinical findings and provide additional insight into the mechanism of clinical interventions in therapeutic stroke trials. The PERFORM study is exploring the efficacy of terutroban versus aspirin for secondary prevention in patients with a history of ischemic stroke. We report on the design of an exploratory longitudinal MRI follow-up study that was performed in a subgroup of the PERFORM trial. An international multi-centre longitudinal follow-up MRI study was designed for different MR systems, employing the following safety and efficacy readouts: new T2 lesions, new DWI lesions, whole-brain volume change, hippocampal volume change, changes in tissue microstructure as depicted by mean diffusivity and fractional anisotropy, vessel patency on MR angiography, and the presence of microbleeds and development of new microbleeds. A total of 1,056 patients (men and women ≥ 55 years) were included. The data analysis included 3D reformation, image registration of different contrasts, tissue segmentation, and automated lesion detection. This large international multi-centre study demonstrates how new MRI readouts can be used to provide key information on the evolution of lesions in the cerebral tissue and in the macrovasculature after atherothrombotic stroke in a large sample of patients.
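To make the analysis chain named above concrete, here is a toy per-subject sketch (NumPy only); every function is a simplified stand-in chosen for illustration, not the registration, segmentation, or lesion-detection tools actually used in the study:

```python
# Toy sketch of the analysis chain: register, segment tissue, detect new
# lesions between time points. All steps are simplified stand-ins.
import numpy as np

def register(follow_up, baseline):
    # Stand-in for image registration: here only intensity normalisation,
    # assuming the volumes are already spatially aligned.
    return follow_up * (baseline.mean() / (follow_up.mean() + 1e-9))

def segment_tissue(volume, threshold=0.5):
    # Stand-in for tissue segmentation: a simple global threshold.
    return volume > threshold

def detect_new_lesions(baseline, follow_up, delta=0.3):
    # Stand-in for automated lesion detection: voxels whose intensity rose
    # by more than `delta` between time points.
    return (follow_up - baseline) > delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t0 = rng.random((16, 16, 16))
    t1 = t0.copy()
    t1[4:6, 4:6, 4:6] += 0.8          # simulate a new lesion at follow-up
    aligned = register(t1, t0)
    print("tissue-mask voxels:", int(segment_tissue(aligned).sum()))
    print("new-lesion voxels:", int(detect_new_lesions(t0, aligned).sum()))
```

In the actual study each step would be a dedicated neuroimaging tool; the sketch only illustrates the order of the chain: align the time points, segment, then compare them for new lesions.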
Abstract:
After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered “closed” and expensive. Data structures are complex, and the “out-of-the-box” integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to business process needs. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the “single version of the truth” MDM mantra. Adding one central repository of master data may prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM. The most difficult piece of master data to manage is the “local” part: the attributes related to the sourcing and storing of materials in one particular warehouse in The Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with a possible tool for evaluating its product from a vendor-agnostic perspective.
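As a minimal illustration of the “single version of the truth” idea discussed above, the sketch below merges duplicate customer records from two systems into one golden record; the record layout and the survivorship rule (latest non-empty value wins) are assumptions made for this example, not the behaviour of any particular MDM product:

```python
# Minimal golden-record sketch: consolidate duplicates of one business
# entity (a customer) held in different source systems.
from datetime import date

records = [
    {"source": "CRM", "customer_id": "C-17", "name": "Acme B.V.",
     "country": "NL", "updated": date(2013, 5, 2)},
    {"source": "ERP", "customer_id": "C-17", "name": "ACME BV",
     "country": "", "updated": date(2014, 1, 20)},
]

def golden_record(duplicates):
    merged = {}
    fields = {k for r in duplicates for k in r if k not in ("source", "updated")}
    for field in fields:
        # Survivorship rule: the latest non-empty value across systems wins.
        candidates = [r for r in duplicates if r.get(field)]
        if not candidates:
            continue
        best = max(candidates, key=lambda r: r["updated"])
        merged[field] = best[field]
    return merged

print(golden_record(records))
```

An incremental MDM rollout would apply rules like this entity by entity, starting from the business areas most critically affected by data issues, in line with the approach the text recommends.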
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Product Data Management (PDM) systems have been utilized within companies since the 1980s, mainly by large companies. This thesis presents the premise that small and medium-sized enterprises (SMEs) can also benefit from utilizing Product Data Management systems. Furthermore, the starting point for the thesis is that the existing PDM systems are either too expensive or do not properly respond to the requirements SMEs have. The aim of this study is to investigate what kinds of requirements and special features SMEs operating in the Finnish manufacturing industry have towards Product Data Management. Additionally, the target is to create a conceptual model that could fulfill the specified requirements. The research has been carried out as a qualitative case study in which the research data was collected from ten Finnish companies operating in the manufacturing industry. The research data was formed by interviewing key personnel from the case companies. The data from the interviews was then processed into a generic set of information system requirements and an information system concept supporting them. The commercialization of the concept is studied in the thesis from the perspective of system development. The aim was to create a conceptual model that would be economically feasible both for a company utilizing the system and for a company developing it. For this reason, the thesis has sought ways to scale the system development effort across multiple simultaneous cases. The main methods found were platform-based thinking and generalizing, or in other words abstracting, the requirements of an information system. The results of the research highlight the special features Finnish manufacturing SMEs have towards PDM. The most significant of these is the use of a project model to manage the order-to-delivery process, which differs significantly from the traditional concepts of Product Data Management presented in the literature. Furthermore, as a research result, this thesis presents a conceptual model of a PDM system that would be viable for the case companies interviewed during the research. As a by-product, this research presents a model, synthesized from the literature, for abstracting information system requirements. In addition, the strategic importance and categorization of information systems within companies is discussed from the perspective of information system customization.
Abstract:
Speaker: Dr Kieron O'Hara
Organiser:
Time: 04/02/2015 11:00-11:45
Location: B32/3077

Abstract: In order to reap the potential societal benefits of big and broad data, it is essential to share and link personal data. However, privacy and data protection considerations mean that, to be shared, personal data must be anonymised, so that the data subject cannot be identified from the data. Anonymisation is therefore a vital tool for data sharing, but deanonymisation, or reidentification, is always possible given sufficient auxiliary information (and as the amount of data grows, both in terms of creation and in terms of availability in the public domain, the probability of finding such auxiliary information grows). This creates issues for the management of anonymisation, which are exacerbated not only by uncertainties about the future, but also by misunderstandings about the process(es) of anonymisation. This talk discusses these issues in relation to privacy, risk management and security, reports on recent theoretical tools created by the UKAN network of statistics professionals (on which the author is one of the leads), and asks how long anonymisation can remain a useful tool, and what might replace it.
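As a minimal illustration of why managing anonymisation is hard, the sketch below computes k-anonymity over a set of quasi-identifiers; the dataset, the column names, and the choice of k-anonymity itself are assumptions for the example rather than anything prescribed in the talk:

```python
# Illustrative k-anonymity check: the smallest group of records that share
# the same quasi-identifier values. A group of size 1 means someone who
# knows those attributes from auxiliary data can re-identify that person.
from collections import Counter

rows = [
    {"age_band": "30-39", "postcode": "SO17", "diagnosis": "asthma"},
    {"age_band": "30-39", "postcode": "SO17", "diagnosis": "diabetes"},
    {"age_band": "40-49", "postcode": "SO16", "diagnosis": "asthma"},
]

def k_anonymity(data, quasi_identifiers):
    """Return the size of the smallest equivalence class over the given columns."""
    classes = Counter(tuple(r[c] for c in quasi_identifiers) for r in data)
    return min(classes.values())

print(k_anonymity(rows, ["age_band", "postcode"]))   # -> 1: re-identification risk
```

Raising the reported k by generalising the quasi-identifiers (wider age bands, shorter postcodes) trades data utility against re-identification risk, which is exactly the kind of risk-management decision the talk frames anonymisation as.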