946 results for Research Data Management
Abstract:
This work supported drafting project management guidance for the Iowa Department of Transportation (DOT). The goal is to incorporate a greater focus on project management in the Iowa DOT's project development process. A technical advisory committee (TAC) was assembled to accomplish this effort. The TAC took into consideration the current status of project management within the Iowa DOT, its experience during the demonstration workshop held in Iowa as part of the implementation assistance the agency received, the project management peer exchange hosted by the Iowa DOT, and additional examples of project management that were presented. On this basis, the TAC participated in a number of discussions to develop draft guidance for the foundation of a Project Management Office (PMO) within the Iowa DOT. The final report describes the process used to establish this guidance, details the decisions and decision process the TAC employed, and provides additional thoughts and insight into the draft guidance. Appendix A includes the draft guidance in the form of PMO function details and detailed lists of project management roles and responsibilities. Appendix B includes a starter list of project management resources for the PMO.
Abstract:
Management of customer co-development means involving customers in the development of new products and services, and coordinating the process. In business-to-business markets, customer co-development enables the development of innovations that better match customer needs and strengthens customer relationships. However, close collaboration with customers can hamper the innovativeness of new products and lead to overly customized solutions. Therefore, the management of co-development is crucial to its success. Yet the existing research on the management of co-development has mainly focused on selecting the right collaboration partners, and the field lacks understanding of how to manage the tensions inherent in customer co-development. The purpose of this thesis is to increase understanding of the management of co-development. The thesis is divided into two parts: the first comprises the literature review and conclusions for the whole study, and the second presents four publications. Methodologically, the research papers follow an exploratory qualitative research design. The empirical data comprise interviews with 60 persons representing 25 different organizations, and a group of 11 end users. The study conceptualizes the management of customer co-development in three dimensions: 1) relational co-development processes, 2) co-development challenges and paradoxes, and 3) internal customer involvement processes. The findings contribute to the customer-supplier relationship, innovation, and marketing management literatures by providing a framework for supplier-customer co-development, addressing co-development paradoxes and their management processes, and suggesting practices for customer involvement. For practitioners, the findings provide tools to manage the challenges related to co-development with customers.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in their diversity, but are integral to the establishment of classes of risk exposure and to the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interviews and surveys, extensive content analysis, practical case studies and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk expresses both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and mitigation; they can be deconstructed into atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment and how, as the focus of our analysis, risk safeguards against omissions that may occur in functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements.
Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and by exposing the resource and associated applications to evaluation by the digital preservation community.
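The abstract names PORRO only as an ontology relating repository, object and risk characteristics. As a rough illustration of what such a linked-data structure could look like, the sketch below builds a tiny graph with rdflib; the namespace, class and property names (Repository, DigitalObject, Risk, holds, exposedTo, likelihood, impact) are invented for illustration and are not the published PORRO vocabulary.

```python
# A minimal sketch of a PORRO-style linked data structure relating a
# repository, a digital object and a risk. All names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

PORRO = Namespace("http://example.org/porro#")  # hypothetical namespace

g = Graph()
g.bind("porro", PORRO)

repo = PORRO.ExampleRepository
obj = PORRO.ExampleObject
risk = PORRO.FormatObsolescence

g.add((repo, RDF.type, PORRO.Repository))
g.add((obj, RDF.type, PORRO.DigitalObject))
g.add((risk, RDF.type, PORRO.Risk))

# Relate the elements: the repository holds the object, which is exposed
# to a risk; the risk carries likelihood and impact, the two components
# of risk identified in the abstract.
g.add((repo, PORRO.holds, obj))
g.add((obj, PORRO.exposedTo, risk))
g.add((risk, PORRO.likelihood, Literal(0.3)))
g.add((risk, PORRO.impact, Literal("loss of renderability")))

print(g.serialize(format="turtle"))
```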
Abstract:
A significant amount of Expendable Bathythermograph (XBT) data has been collected in the Mediterranean Sea since 1999 in the framework of operational oceanography activities. The management and storage of such a volume of data pose significant challenges and offer significant opportunities. The SeaDataNet project, a pan-European infrastructure for marine data diffusion, provides a convenient way to avoid dispersion of these temperature vertical profiles and to facilitate access to a wider public. The XBT data flow is described, along with recent improvements in the quality-check procedures and the consistency of the available historical data set. The main features of SeaDataNet services and the advantages of using this system for long-term data archiving are presented. Finally, a focus on the Ligurian Sea is included in order to provide an example of the kind of information and final products, aimed at different users, that can easily be derived from the SeaDataNet web portal.
Abstract:
This document summarizes a major part of the work performed by the FP7-JERICO consortium, comprising 27 partner institutions, over four years (2011-2015). Its objective is to propose a strategy for European coastal observation and monitoring. To do so, we give an overview of the main achievements of the FP7-JERICO project; from this overview, gaps are analysed to draw recommendations for the future. The overview, gaps and recommendations are addressed at both the hardware and software levels of the JERICO Research Infrastructure. The main part of the document builds upon this analysis, and upon discussions held in dedicated JERICO strategy workshops, to set out a general strategy for the future, identifying priorities to be targeted and possible funding mechanisms. This document was initiated in 2014 by the coordination team but, since an overview of the entire project and its achievements was needed to feed this strategy deliverable, it could not be completed before the end of FP7-JERICO in April 2015. The preparation of the JERICO-NEXT proposal in summer 2014, in answer to an H2020 call, pushed the consortium ahead and fed deeper thinking about this strategy, but the intention was not to propose a strategy bounded only by the JERICO-NEXT answer. The authors are conscious that writing JERICO-NEXT introduced some bias into these reflections and have tried to remain open; comments are always welcome. Structure of the document: Chapter 3 introduces the need for sustained coastal observatories from different points of view, including a short description of the FP7-JERICO project. Chapter 4 provides, region by region, an analysis of the JERICO coastal observatory hardware (platforms and sensors) in terms of its status at the end of JERICO, identified gaps and recommendations for further development; the main challenges that remain to be overcome are also summarized. Chapter 5 is dedicated to the JERICO infrastructure software (calibration, operation, quality assessment, data management) and the progress made through JERICO on harmonizing procedures and defining best practices. Chapter 6 provides elements of a strategy towards sustainable and integrated coastal observations for Europe, drawing a roadmap for cost-effective, science-based consolidation of the present infrastructure while maximizing the potential arising from JERICO in terms of innovation, wealth creation and business development. For readers unfamiliar with JERICO, any chapter can be read independently after Chapter 3. More details are available in the JERICO final and intermediate reports and deliverables, all of which are available on the JERICO web site (www.jerico-FP7.eu). Each chapter lists the JERICO documents it refers to. A short bibliography is available at the end of this deliverable.
Abstract:
The overarching aim of this thesis was to develop an intervention to support patient-centred prescribing in the context of multimorbidity in primary care. Methods: A range of research methods were used to address different components of the Medical Research Council, UK (MRC) guidance on the development and evaluation of complex interventions in health care. The existing evidence on GPs' perceptions of the management of multimorbidity was systematically reviewed. In qualitative interviews, chart-stimulated recall was used to explore the challenges experienced by GPs when prescribing for multimorbid patients. In a cross-sectional study, the psychosocial issues that complicate the management of multimorbidity were examined. To develop the complex intervention, the Behaviour Change Wheel (BCW) was used to integrate behavioural theory with the findings of these three studies. A feasibility study of the new intervention was then conducted with GPs. Results: The systematic review revealed four domains of clinical practice where GPs experienced difficulties in multimorbidity. The qualitative interview study showed that GPs responded to these difficulties by 'satisficing'. In multimorbid patients perceived as stable, GPs preferred to 'maintain the status quo' rather than actively change medications. The cross-sectional study demonstrated a significant association between multimorbidity and negative psychosocial factors. These findings informed the development of the 'Multimorbidity Collaborative Medication Review and Decision-making' (MY COMRADE) intervention. The intervention involves peer support: two GPs review together the medications prescribed to a complex multimorbid patient. In the feasibility study, GPs reported that the intervention was appropriate for the context of general practice and widely applicable to their patients with multimorbidity, and recommendations for optimising medications arose from all collaborative reviews. Conclusion: Applying theory to empirical data has led to an intervention that is implementable in clinical practice and has the potential to positively change GPs' behaviour in the management of medications for patients with multimorbidity.
Abstract:
Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, with performance comparable to that of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a machine learning pipeline that leverages the word2vec language model and K-means clustering. This yields groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability to understand the message content and provide meaningful groupings, in line with incidents previously reported by human operators.
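As a rough sketch of the kind of pipeline this abstract describes, the snippet below embeds failed-transfer error messages with word2vec, averages the word vectors per message, and groups the messages with K-means. The toy messages, tokenisation, vector size and number of clusters are illustrative assumptions, not the settings used in the thesis.

```python
# A minimal word2vec + K-means sketch for grouping similar error messages.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

messages = [
    "connection timed out while contacting storage element",
    "connection timed out after 300 seconds",
    "checksum mismatch detected for file replica",
    "checksum verification failed on destination",
]
tokenized = [m.split() for m in messages]

# Train word2vec on the message corpus (tiny here, for illustration).
w2v = Word2Vec(tokenized, vector_size=32, min_count=1, seed=0)

# Represent each message as the mean of its word vectors.
X = np.array([w2v.wv[tokens].mean(axis=0) for tokens in tokenized])

# Group similar errors; each cluster becomes a suggestion for operators.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for msg, label in zip(messages, labels):
    print(label, msg)
```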
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art and architecture history. The emerging BIM methodology and the IFC data exchange format are changing how collaboration, visualisation and documentation are carried out in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art and architecture history have acted as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded in accurate documentation and visualisation. However, these standards are still missing and the validation of the outcomes remains unfulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment for hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially as related to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, but also (in dealing with different hypotheses) to a better definition of the levels of uncertainty.
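The abstract refers to classifying and visualising levels of uncertainty in hypothetical reconstructions. As a toy illustration of the idea, the sketch below assigns each element of an invented model an ordinal uncertainty level and maps it to a colour for false-colour rendering; the scale, labels and colours are assumptions for illustration, not the dissertation's actual classification.

```python
# A toy ordinal uncertainty scale mapped to colours for a false-colour
# visualisation of a hypothetical reconstruction. All values are invented.
uncertainty_scale = {
    1: ("documented by sources", "#1a9850"),
    2: ("inferred by analogy", "#fee08b"),
    3: ("conjectural hypothesis", "#d73027"),
}

model_elements = {"portal": 1, "roof": 2, "north tower": 3}

for element, level in model_elements.items():
    label, colour = uncertainty_scale[level]
    print(f"{element}: level {level} ({label}) -> render in {colour}")
```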
Abstract:
The analysis of interviews with open-ended questions is a common practice amongst researchers in the field of management. The difficulty therein is to convert the linguistic data into categories or quantitative values for subsequent statistical treatment. Proposals made to this end generally entail counting lexical occurrences which, since they are founded on previously established meanings, fail to capture the semantic associations made by interviewees. This article presents an analysis tool comprising a set of techniques apt to generate linguistic units that can be statistically described, compared, modeled and inferred: Quantitative Propositional Analysis (QPA). Its main difference from other such methods lies in the choice of the proposition, rather than the lexical unit, as the unit of analysis. We demonstrate the application of this method through a study of the international expansion of retail firms.
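To make the contrast concrete, the toy sketch below compares counting lexical occurrences with counting hand-coded propositions as the unit of analysis. The example sentence and the coded propositions are invented, and QPA relies on analyst judgement rather than the hard-coded extraction shown here.

```python
# Lexical counting vs. propositions as the unit of analysis (toy example).
from collections import Counter

answer = "We entered Argentina first and then expanded to Argentina and Chile."

# Lexical unit: frequency of word forms, blind to who-did-what-to-whom.
lexical_counts = Counter(answer.lower().strip(".").split())

# Propositional unit: each predicate-argument structure coded once, so it
# can be compared and modelled statistically across interviews.
propositions = [
    ("enter", "firm", "Argentina"),
    ("expand", "firm", "Argentina"),
    ("expand", "firm", "Chile"),
]
propositional_counts = Counter(pred for pred, *_ in propositions)

print(lexical_counts["argentina"])     # 2 occurrences of the word
print(propositional_counts["expand"])  # 2 expansion propositions
```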
Abstract:
In this paper, we describe the Vannotea system, an application designed to enable collaborating groups to discuss and annotate collections of high-quality images, video, audio or 3D objects. The system has been designed specifically to capture and share scholarly discourse and annotations about multimedia research data by teams of trusted colleagues within a research or academic environment. As such, it provides: authenticated access to a web browser search interface for discovering and retrieving media objects; a media replay window that can incorporate a variety of embedded plug-ins to render different scientific media formats; an annotation authoring, editing, searching and browsing tool; and session logging and replay capabilities. Annotations are personal remarks, interpretations, questions or references that can be attached to whole files, segments or regions. Vannotea enables annotations to be attached either synchronously (using Jabber message passing and audio/video conferencing) or asynchronously in stand-alone use. The annotations are stored on an Annotea server extended for multimedia content. Their access, retrieval and re-use are controlled via Shibboleth identity management and XACML access policies.
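As an illustration of the annotation model described here, the sketch below defines a minimal record that can target a whole file, a temporal segment or a spatial region. The field names and types are assumptions for clarity, not the actual Vannotea or Annotea schema.

```python
# A minimal, hypothetical annotation record targeting a file, segment or region.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Annotation:
    author: str     # authenticated identity (Shibboleth in Vannotea)
    media_uri: str  # the annotated media object
    body: str       # remark, interpretation, question or reference
    segment: Optional[Tuple[float, float]] = None  # (start, end) seconds, for video/audio
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h), for images

whole_file = Annotation("alice", "http://example.org/video42",
                        "Compare with trial 3")
clip = Annotation("bob", "http://example.org/video42",
                  "Cell division begins here", segment=(12.0, 18.5))
```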
Abstract:
Allergies represent a significant medical and industrial problem. Molecular and clinical data on allergens are growing exponentially, and in this article we review nine specialized allergen databases and identify data sources related to protein allergens contained in general-purpose molecular databases. An analysis of allergens contained in public databases indicates a high level of redundancy of entries and a relatively low coverage of allergens by individual databases. From this analysis we identify current database needs for allergy research and, in particular, highlight the need for a centralized reference allergen database.
Abstract:
This paper presents the results of a study on the analysis of training needs regarding environmental (green) management and climate change topics in micro and small enterprises (MSEs) in Brazil and its implications for education for sustainable development. It reports on an e-mail survey of Brazilian small enterprises, whose results indicate that they are indeed interested in environmental management and climate change topics in an education for sustainable development context. The study indicates that proposals for courses on environmental management and climate change should follow a systemic perspective and take sustainable development into account. By applying factor analysis, it was found that the topics of interest can be grouped into thematic modules, which can be useful in designing training courses for the top management of those companies.
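As a rough illustration of the grouping step reported in the study, the sketch below applies factor analysis to synthetic survey ratings so that correlated topics load on shared factors (thematic modules). The topics, data and number of factors are invented for illustration, not the study's actual survey items or results.

```python
# Factor analysis on synthetic topic-interest ratings: topics with similar
# loadings group into a thematic module. All data are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

topics = ["carbon footprint", "emissions reporting",
          "waste management", "recycling practices"]
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 2))               # two latent themes
loadings = np.array([[1.0, 0.1], [0.9, 0.2],  # climate-related topics
                     [0.1, 1.0], [0.2, 0.9]]) # waste-related topics
ratings = base @ loadings.T + rng.normal(scale=0.3, size=(50, 4))

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
for topic, row in zip(topics, fa.components_.T):
    print(topic, np.round(row, 2))  # similar loadings -> same module
```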
Abstract:
Master's in Socio-Organizational Intervention in Health - Specialization: Quality and Health Technologies.
Abstract:
Master's in Electrical and Computer Engineering