15 results for product data management
in CentAUR: Central Archive University of Reading - UK
Abstract:
The iRODS system, created by the San Diego Supercomputer Center, is a rule-oriented data management system that allows the user to create sets of rules to define how the data are to be managed. Each rule corresponds to a particular action or operation (such as checksumming a file), and the system is flexible enough to allow the user to create new rules for new types of operations. The iRODS system can interface to any storage system (provided an iRODS driver is built for that system) and relies on its metadata catalogue to provide a virtual file system that can handle files of any size and type. However, some storage systems (such as tape systems) do not handle small files efficiently and prefer small files to be packaged up (or "bundled") into larger units. We have developed a system that can bundle small data files of any type into larger units, called mounted collections. The system can create collection families and contains its own extensible metadata, including metadata on which family a collection belongs to. The mounted collection system can work standalone and is being incorporated into the iRODS system to enhance the system's flexibility in handling small files. In this paper we describe the motivation for creating a mounted collection system, its architecture and how it has been incorporated into the iRODS system. We describe the different technologies used to create the mounted collection system and provide some performance numbers.
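To make the bundling idea concrete, here is a minimal Python sketch, assuming hypothetical names (`bundle_collection`, a JSON sidecar standing in for the extensible metadata catalogue); it illustrates packing small files into one tar unit tagged with its collection family, and is not the actual iRODS mounted-collection implementation.

```python
import json
import tarfile
from pathlib import Path

def bundle_collection(files, bundle_path, family, extra_metadata=None):
    """Pack small files into a single tar 'mounted collection' with a
    metadata sidecar (a stand-in for a catalogue entry)."""
    metadata = {
        "family": family,                          # which collection family this bundle belongs to
        "members": [Path(f).name for f in files],  # what the bundle contains
        **(extra_metadata or {}),                  # extensible: callers may add arbitrary keys
    }
    with tarfile.open(bundle_path, "w") as bundle:
        for f in files:
            bundle.add(f, arcname=Path(f).name)
    Path(f"{bundle_path}.meta.json").write_text(json.dumps(metadata, indent=2))

# Usage: bundle_collection(["a.dat", "b.dat"], "family1_0001.tar", family="family1")
```

Once many small files travel as one archive plus one catalogue entry, a tape system sees a single large unit, which is exactly the access pattern it handles efficiently.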
Abstract:
There is remarkable agreement in expectations today for vastly improved ocean data management a decade from now: capabilities that will help to bring significant benefits to ocean research and to society. Advancing data management to such a degree, however, will require cultural and policy changes that are slow to effect. The technological foundations upon which data management systems are built are certain to continue advancing rapidly in parallel. These considerations argue for adopting attitudes of pragmatism and realism when planning data management strategies. In this paper we adopt those attitudes as we outline opportunities for progress in ocean data management. We begin with a synopsis of expectations for integrated ocean data management a decade from now. We discuss factors that should be considered by those evaluating candidate "standards". We highlight challenges and opportunities in a number of technical areas, including "Web 2.0" applications, data modeling, data discovery and metadata, real-time operational data, archival of data, biological data management and satellite data management. We discuss the importance of investments in the development of software toolkits to accelerate progress. We conclude the paper by recommending a few specific, short-term targets for implementation that we believe to be both significant and achievable, and by calling for action by community leadership to effect these advancements.
Abstract:
Climate-G is a large-scale distributed testbed devoted to climate change research. It is an unfunded effort started in 2008 and involving a wide community in both Europe and the US. The testbed is an interdisciplinary effort involving partners from several institutions and joining expertise in the fields of climate change and computational science. Its main goal is to allow scientists to carry out geographical and cross-institutional data discovery, access, analysis, visualization and sharing of climate data. It represents an attempt to address, in a real environment, challenging data and metadata management issues. This paper presents a complete overview of the Climate-G testbed, highlighting the most important results that have been achieved since the beginning of the project.
Abstract:
Purpose: To investigate the relationship between research data management (RDM) and data sharing in the formulation of RDM policies and the development of practices in higher education institutions (HEIs). Design/methodology/approach: Two strands of work were undertaken sequentially: firstly, content analysis of 37 RDM policies from UK HEIs; secondly, two detailed case studies of institutions with different approaches to RDM, based on semi-structured interviews with staff involved in the development of RDM policy and services. The data are interpreted using insights from Actor Network Theory. Findings: RDM policy formation and service development have created a complex set of networks within and beyond institutions, involving different professional groups whose widely varying priorities shape activities. Data sharing is considered an important activity in the policies and services of the HEIs studied, but its prominence can in most cases be attributed to the positions adopted by large research funders. Research limitations/implications: The case studies, as research based on qualitative data, cannot be assumed to be universally applicable, but they do illustrate a variety of issues and challenges experienced more generally, particularly in the UK. Practical implications: The research may help to inform the development of policy and practice in RDM in HEIs and funder organisations. Originality/value: This paper makes an early contribution to the RDM literature on the specific topic of the relationship between RDM policy and services, and openness, a topic which to date has received limited attention.
Abstract:
Mainframes and corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely to convert raw data into information and then into high-grade knowledge. Its response to this, and to the underlying search problems, was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently, behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will in the main be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.
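The principle CAFS demonstrates, pushing selection down so records are filtered in parallel close to where they are stored while callers see an unchanged interface, can be sketched in modern terms. The following Python fragment is purely illustrative; the record layout and predicate are hypothetical, and this is not ICL's design.

```python
from concurrent.futures import ProcessPoolExecutor

def high_value(row):
    """Hypothetical selection predicate (module-level so it can be pickled)."""
    return row["amount"] > 1000

def scan_partition(partition):
    # Push-down in the CAFS spirit: filter the partition where it lives,
    # so only matching rows travel back to the caller.
    return [row for row in partition if high_value(row)]

def parallel_select(partitions):
    # Scan every partition in parallel; the caller sees one combined result
    # set, exactly as if a single sequential scan had run behind the interface.
    with ProcessPoolExecutor() as pool:
        return [row for part in pool.map(scan_partition, partitions) for row in part]

if __name__ == "__main__":
    data = [[{"amount": 500}, {"amount": 2000}], [{"amount": 1500}]]
    print(parallel_select(data))  # -> [{'amount': 2000}, {'amount': 1500}]
```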
Abstract:
This paper describes a case study of an electronic data management system developed in-house by the Facilities Management Directorate (FMD) of an educational institution in the UK. The FMD's Maintenance and Business Services department is responsible for the maintenance of the built estate owned by the university. The department needs a clear definition of the type of work undertaken and of the administration that enables any maintenance work to be carried out. This includes managing the resources, budget, cash flow and workflow of reactive, preventative and planned maintenance across the campus. To support its business processes more efficiently, the FMD decided to move from a paper-based information system to an electronic system, WREN. Some of the main advantages of WREN are that it is tailor-made to fit the purpose of its users, it is cost-effective when it comes to modifications to the system, and its database can also be used as a knowledge management tool. There is a trade-off: because WREN is tailored to the specific requirements of the FMD, it may not be easy to implement within a different institution without extensive modifications. However, WREN not only allows the FMD to carry out the tasks of maintaining and looking after the built estate of the university, but has also achieved its aim of minimising costs and maximising efficiency.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data, which is used to populate the second component, a data warehouse containing important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories. The latter is an important and challenging aspect of P-found, because of the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
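As an illustration of the first aspect, the sketch below fans a single query out to several distributed warehouses and merges the results. The endpoint URLs, the JSON query format and the function names are all hypothetical assumptions for the sake of the example, not the P-found interfaces.

```python
import concurrent.futures
import json
import urllib.request

# Hypothetical endpoints for two distributed warehouse installations.
WAREHOUSES = [
    "https://pfound-a.example.org/query",
    "https://pfound-b.example.org/query",
]

def query_warehouse(url, query):
    """Send one query to one warehouse and return its decoded rows."""
    payload = json.dumps(query).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def federated_query(query):
    # Fan the same query out to every installation and merge the results,
    # so the data mining step sees one logical warehouse.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_warehouse, url, query) for url in WAREHOUSES]
        return [row for f in futures for row in f.result()]
```

Shipping analysis programs in the opposite direction (code to the primary repositories, rather than data to the analyst) follows the same federation pattern, but keeps the bulky trajectory data, and control over it, at the owning site.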
Abstract:
The Environmental Data Abstraction Library (EDAL) is a modular data management library for bringing new and diverse data types together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented, along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasize the value of this capability for cross-calibration of datasets and for evaluation of model products against observations, including in preparation for data assimilation.
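The design idea, a single abstraction through which gridded model fields and scattered in situ observations can be compared, might be sketched as follows in Python; the class and method names here are hypothetical, and this is not EDAL's actual API.

```python
from abc import ABC, abstractmethod

class Feature(ABC):
    """Common interface every data type exposes to the viewer (hypothetical)."""
    @abstractmethod
    def value_at(self, lon: float, lat: float) -> float: ...

class GriddedModelField(Feature):
    """Adapter for model (or satellite) output on a regular lon/lat grid."""
    def __init__(self, lons, lats, values):
        self.lons, self.lats, self.values = lons, lats, values
    def value_at(self, lon, lat):
        # Nearest-neighbour lookup into the grid.
        i = min(range(len(self.lats)), key=lambda k: abs(self.lats[k] - lat))
        j = min(range(len(self.lons)), key=lambda k: abs(self.lons[k] - lon))
        return self.values[i][j]

class InSituPointCollection(Feature):
    """Adapter for scattered in situ observations: [(lon, lat, value), ...]."""
    def __init__(self, points):
        self.points = points
    def value_at(self, lon, lat):
        # Value of the closest observation.
        return min(self.points, key=lambda p: (p[0] - lon) ** 2 + (p[1] - lat) ** 2)[2]

def compare(a: Feature, b: Feature, lon: float, lat: float) -> float:
    # A viewer or cross-calibration tool needs only the shared interface.
    return a.value_at(lon, lat) - b.value_at(lon, lat)

# model = GriddedModelField([0, 1], [0, 1], [[10.0, 11.0], [12.0, 13.0]])
# obs = InSituPointCollection([(0.2, 0.1, 10.5)])
# compare(model, obs, 0.0, 0.0)  # -> -0.5
```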
Abstract:
Purpose: When consumers buy online, they are often confronted with consumer reviews. A negative consumer review on an online shopping website may keep consumers from buying the product, so negative online consumer reviews are a serious problem for brands. This paper aims to investigate the effects of different response options to a negative consumer review. Design/methodology/approach: In an online experiment with 446 participants, different response options to a negative consumer review on an online shopping website are examined. The experimental data are analysed with simple linear regression models using product purchase intentions as the outcome variable. Findings: The results indicate that a positive customer review counteracts a negative consumer review more effectively than a positive brand response, with brand strength moderating this relationship. Including a reference to an independent, trusted source in a brand or customer response is only of limited use for increasing the effectiveness of a response. Research limitations/implications: Additional research in other product categories and with subjects other than students is suggested to validate the findings. Future research could also vary the strength of the phrasing of the reference. Practical implications: Assuming high-quality products, brands should encourage their customers to write reviews. Strong brands can also reassure consumers by responding, whereas weak brands cannot. Originality/value: This research contributes to the online consumer reviews literature with new insights into the role of brand strength and of referencing an independent, trusted source.
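The kind of analysis described might look as follows: an OLS regression of purchase intention on response type with a brand-strength moderation term. This is a hedged sketch using the statsmodels formula API with synthetic placeholder data and illustrative variable names; the paper's exact model specification is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data; the levels and names are illustrative, not the study's.
rng = np.random.default_rng(0)
n = 446
df = pd.DataFrame({
    "response": rng.choice(["none", "brand_response", "customer_review"], size=n),
    "brand_strength": rng.choice(["weak", "strong"], size=n),
})
df["purchase_intention"] = 4 + rng.normal(0, 1, size=n)

# Purchase intention regressed on response type, with brand strength as a
# moderator (the interaction term captures the moderation effect).
model = smf.ols("purchase_intention ~ C(response) * C(brand_strength)", data=df).fit()
print(model.summary())
```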
Abstract:
Complex products such as manufacturing equipment have always needed maintenance and repair services. Increasingly, leading manufacturers are integrating products and services to generate increased revenues and achieve customer satisfaction. Designing integrated products and services requires a different approach to new product development and a clear understanding of how customers perceive the value they obtain from actual usage of products and services, so-called value-in-use. However, there is a lack of research on integrated products and services and how they affect customer satisfaction. An exploratory study was undertaken to understand customers' views on integrated products and services and the value-in-use derived from such offerings. As value-in-use and its impacts are complicated concepts, a technique from psychology, the Repertory Grid Technique, was used to gather data in 33 interviews. The interviews allowed a deep understanding of customer views on integrated products and services to be obtained, and a systematic analysis identified the key attributes of value-in-use. To probe further, the data were then analysed using Honey's procedure, which identified the impact of the attributes of value-in-use on customer satisfaction. Two key attributes, relational dynamic and access, were found to have the most influence on customer satisfaction. This paper contributes to the innovation field by identifying customer needs for integrated products and services and how these affect customer satisfaction. These are key points that need to be fully considered by managers during new product and service development. The paper also identifies a number of important areas for further research.
Abstract:
ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of environmental data. Here we describe the implementation of a specialisation of O&M for environmental data: the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data and for user navigation around data holdings. The implementation described here, "CEDA-MOLES", also supports data management functions for the Centre for Environmental Data Archival (CEDA). The previous iteration of MOLES (MOLES2) saw active use over five years before being replaced by CEDA-MOLES in late 2014. During that period, important lessons were learnt both about the information needed and about how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, we provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the need for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and for appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free-text fields, and the necessity of supporting as much agility in the information infrastructure as possible without compromising maintainability, both for those using the systems internally and externally (e.g. citing into the information infrastructure) and for those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
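One of the key drivers, fixed DOI landing pages over evolving datasets, can be illustrated with a small sketch. The data model below is a hypothetical simplification for the sake of the example, not the CEDA-MOLES schema: the DOI and its landing record stay fixed while new dataset snapshots are appended, so citations keep resolving as the dataset evolves.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetVersion:
    version: str
    archive_path: str  # where this frozen snapshot lives in the archive

@dataclass
class LandingRecord:
    """Fixed landing page for a DOI over an evolving dataset (illustrative only)."""
    doi: str
    title: str
    provenance: list[str] = field(default_factory=list)       # e.g. sources, processing steps
    versions: list[DatasetVersion] = field(default_factory=list)

    def register_version(self, version: str, archive_path: str) -> None:
        # The DOI and landing record never change; growth happens by
        # appending snapshots, preserving a stable citation target.
        self.versions.append(DatasetVersion(version, archive_path))

record = LandingRecord(doi="10.0000/example", title="Evolving observation dataset")
record.register_version("v1.0", "/archive/ds/v1.0")
record.register_version("v1.1", "/archive/ds/v1.1")
```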