12 results for repositories
in CentAUR: Central Archive University of Reading - UK
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories, which is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories, which is an important and challenging aspect of P-found due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
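The two aspects above amount to a code-to-data pattern: rather than centralising terabytes of trajectory data, the analysis program travels to each primary repository and only compact results come back. The following is a minimal, hypothetical Python sketch of that flow; the endpoint URLs, the submit_analysis helper and the result format are assumptions for illustration, not the actual P-found or grid middleware interfaces.

    # Hypothetical sketch of the "ship the analysis to the data" pattern described
    # above. Endpoints, job submission and result shape are placeholders only.
    from concurrent.futures import ThreadPoolExecutor

    PRIMARY_REPOSITORIES = [
        "https://pfound.site-a.example/api",   # assumed site endpoints
        "https://pfound.site-b.example/api",
    ]

    def submit_analysis(site_url: str, analysis_script: str) -> dict:
        """Model submitting an analysis program to run next to the data.

        In a real grid deployment this would be a job-submission call made
        through the grid middleware; here only the flow is modelled.
        """
        # ... transfer `analysis_script` to `site_url` and run it remotely ...
        return {"site": site_url, "summary": {"n_trajectories": 0}}  # compact placeholder result

    def run_everywhere(analysis_script: str) -> list[dict]:
        # Fan the same analysis out to every primary repository and collect
        # only the small summaries, never the raw simulation data.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda url: submit_analysis(url, analysis_script),
                                 PRIMARY_REPOSITORIES))

    if __name__ == "__main__":
        print(run_everywhere("compute_folding_metrics.py"))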
Abstract:
Learning Objects offer flexibility and adaptability, allowing users to request personalised information for learning. Standards exist to guide the development of learning objects; however, individual developers may customise these standards to serve different purposes when defining, describing, managing and providing learning objects, which are normally stored in heterogeneous repositories. Barriers to interoperability hinder the sharing of learning services and consequently affect the quality of instructional design, as learners expect to be able to receive personalised learning content. All of this makes it difficult for users to obtain the right information from the right sources. This paper investigates the interoperability issues in eLearning services management and provision, and presents an approach to resolving interoperability at three levels.
Abstract:
Collaborative software is usually thought of as providing audio-video conferencing services, application/desktop sharing, and access to large content repositories. However, mobile device usage is characterized by users carrying out short and intermittent tasks, sometimes referred to as 'micro-tasking'. Such micro-collaborations are not well supported by traditional groupware systems, and the work in this paper seeks to address this. Mico is a system that provides a set of application-level peer-to-peer services for the ad-hoc formation and facilitation of collaborative groups across a diverse mobile device domain. The system builds on the Java ME bindings of the JXTA P2P protocols and is designed around the lowest common denominator of capabilities required for collaboration between devices of varying capability. To demonstrate how our platform facilitates application development, we built a set of demonstration applications and include code examples to illustrate the ease and speed afforded when developing collaborative software with Mico.
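Mico itself exposes its services through the Java ME bindings of JXTA, which are not reproduced here. The following language-agnostic Python sketch only illustrates the lowest-common-denominator idea: an ad-hoc group settles on the intersection of the capabilities its member devices advertise. All names and capability strings are hypothetical.

    # Hypothetical sketch of lowest-common-denominator group formation
    # (illustration only; Mico uses Java ME / JXTA, not this code).
    from dataclasses import dataclass

    @dataclass
    class Peer:
        name: str
        capabilities: set[str]   # e.g. {"text", "file-share", "audio"}

    def form_group(peers: list[Peer]) -> set[str]:
        """Return the set of services every peer in the ad-hoc group can use."""
        if not peers:
            return set()
        common = set(peers[0].capabilities)
        for peer in peers[1:]:
            common &= peer.capabilities
        return common

    devices = [
        Peer("feature-phone", {"text"}),
        Peer("smartphone", {"text", "file-share", "audio"}),
        Peer("tablet", {"text", "file-share"}),
    ]
    print(form_group(devices))   # -> {'text'}: collaborate at the level all devices support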
Abstract:
Many projects, e.g. VIKEF [13] and KIM [7], present grounded approaches for the use of entities as a means of indexing and retrieving multimedia resources from heterogeneous sources. In this paper, we discuss the state of the art of entity-centric approaches for multimedia indexing and retrieval. A summary of projects employing entity-centric repositories is presented. This paper also looks at the current state-of-the-art authoring environment, Macromedia Authorware, and the potential to extend this environment for entity-based multimedia authoring.
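As a rough illustration of what "entity-centric" means in practice (independent of VIKEF or KIM, whose implementations are not shown), the sketch below keys an index by entity identifiers rather than free-text keywords; the URIs and resource names are invented for the example.

    # Minimal entity-centric index: multimedia resources are keyed by the
    # entity URIs they are annotated with, not by free-text keywords.
    from collections import defaultdict

    entity_index: dict[str, set[str]] = defaultdict(set)

    def index_resource(resource_id: str, entities: list[str]) -> None:
        """Register a multimedia resource under every entity it is annotated with."""
        for entity_uri in entities:
            entity_index[entity_uri].add(resource_id)

    def retrieve(entity_uri: str) -> set[str]:
        """Entity-based retrieval: all resources annotated with the given entity."""
        return entity_index.get(entity_uri, set())

    index_resource("clip-042.mp4", ["http://example.org/entity/ReadingAbbey"])
    index_resource("photo-107.jpg", ["http://example.org/entity/ReadingAbbey",
                                     "http://example.org/entity/HenryI"])
    print(retrieve("http://example.org/entity/ReadingAbbey"))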
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM demonstrator has been evaluated as deployed in the film post-production phase, an exemplar application domain, to support the storage, indexing and retrieval of large data sets of special effects video clips. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes.
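To make the role of the ontology network concrete, here is a toy, hypothetical sketch (not the DREAM implementation) of ontology-driven retrieval: a query concept is expanded to its narrower concepts before clip annotations are matched, so clips tagged only with specific concepts are still found. The concepts, clips and toy ontology are assumptions for illustration.

    # Illustrative ontology-driven retrieval: expand a query concept to its
    # narrower concepts, then match against semantic clip annotations.
    TOY_ONTOLOGY = {                      # concept -> narrower concepts
        "explosion": ["fireball", "debris-cloud"],
        "fireball": [],
        "debris-cloud": [],
    }

    CLIP_ANNOTATIONS = {                  # clip -> semantic concepts
        "sfx_0001.mov": {"fireball"},
        "sfx_0002.mov": {"debris-cloud", "smoke"},
    }

    def expand(concept: str) -> set[str]:
        """Return the concept plus everything narrower than it (transitively)."""
        found = {concept}
        for child in TOY_ONTOLOGY.get(concept, []):
            found |= expand(child)
        return found

    def retrieve(concept: str) -> list[str]:
        wanted = expand(concept)
        return [clip for clip, tags in CLIP_ANNOTATIONS.items() if tags & wanted]

    print(retrieve("explosion"))   # matches clips annotated only with narrower concepts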
Abstract:
Information provision that addresses changing requirements is best supported by content management. Current information technology enables information to be stored in, and provided from, various distributed sources. Identifying and retrieving relevant information requires effective mechanisms for information discovery and assembly. This paper presents a method which enables the design of such mechanisms, with a set of techniques for articulating and profiling users' requirements, formulating information provision specifications, realising management of information content in repositories, and facilitating dynamic response to users' requirements during the process of knowledge construction. These functions are represented in an ontology which integrates the capabilities of the mechanisms. The ontological modelling in this paper adopts semiotic principles with embedded norms to ensure a coherent course of action across these mechanisms.
Abstract:
The knowledge economy offers a broad and diverse community of information systems users the opportunity to efficiently gain information and know-how for improving qualifications and enhancing productivity in the workplace. Such demand will continue, and users will frequently require optimised and personalised information content. The advancement of information technology and the wide dissemination of information support individual users in constructing new knowledge from their experience in a real-world context. However, designing personalised information provision is challenging because users' requirements and information provision specifications are complex to represent, and existing methods are not able to effectively support this analysis process. This paper presents a mechanism which can holistically facilitate customisation of information provision based on individual users' goals, level of knowledge and cognitive style preferences. An ontology model with embedded norms represents the domain knowledge of information provision in a specific context in which users' needs can be articulated and represented in a user profile. These formal requirements can then be transformed into information provision specifications, which are used to discover suitable information content from repositories and pedagogically organise the selected content to meet the users' needs. The method is adaptive, enabling an appropriate response to changes in users' requirements during the process of acquiring knowledge and skills.
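A simplified, hypothetical sketch of this kind of profile-driven provision is given below; the profile fields, content metadata and scoring weights are assumptions for illustration, not the paper's ontology or norm model.

    # Hypothetical profile-driven content selection: match a user profile
    # (goal, knowledge level, cognitive style) against repository metadata.
    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        goal: str                 # e.g. "spreadsheet-basics"
        knowledge_level: int      # 1 (novice) .. 5 (expert)
        cognitive_style: str      # e.g. "visual" or "verbal"

    @dataclass
    class ContentItem:
        title: str
        topic: str
        difficulty: int
        style: str

    def score(item: ContentItem, profile: UserProfile) -> float:
        """Higher score = better fit to the user's articulated requirements."""
        if item.topic != profile.goal:
            return 0.0
        fit = 1.0 / (1 + abs(item.difficulty - profile.knowledge_level))
        if item.style == profile.cognitive_style:
            fit += 0.5            # assumed weight for the preferred presentation style
        return fit

    def provision(repository: list[ContentItem], profile: UserProfile) -> list[ContentItem]:
        # Discover, then pedagogically order: best-fitting items first.
        ranked = sorted(repository, key=lambda item: score(item, profile), reverse=True)
        return [item for item in ranked if score(item, profile) > 0]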
Abstract:
Climate modelling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models, as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
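The pairing of a generic schema with controlled vocabularies can be illustrated with a small, hypothetical check; the field names and permitted terms below are placeholders, not the real CIM or CMIP5 vocabularies.

    # Illustrative check of metadata values against a controlled vocabulary.
    # Field names and terms are invented for the example.
    CONTROLLED_VOCAB = {
        "model_component": {"atmosphere", "ocean", "sea-ice", "land-surface"},
        "calendar": {"gregorian", "360_day", "noleap"},
    }

    def validate(record: dict[str, str]) -> list[str]:
        """Return a list of vocabulary violations (empty list = valid instance)."""
        problems = []
        for field, value in record.items():
            allowed = CONTROLLED_VOCAB.get(field)
            if allowed is not None and value not in allowed:
                problems.append(f"{field}={value!r} not in {sorted(allowed)}")
        return problems

    print(validate({"model_component": "ocean", "calendar": "julian"}))
    # -> ["calendar='julian' not in ['360_day', 'gregorian', 'noleap']"]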
Abstract:
LRRK2 was identified in 2004 as the causative protein product of the Parkinson's disease locus designated PARK8. In the decade since then, genetic studies have revealed at least 6 dominant mutations in LRRK2 linked to Parkinson's disease, alongside one associated with cancer. It is now well established that coding changes in LRRK2 are one of the most common causes of Parkinson's. Genome-wide association studies (GWAS) have, more recently, reported single nucleotide polymorphisms (SNPs) around the LRRK2 locus to be associated with risk of developing sporadic Parkinson's disease and inflammatory bowel disorder. The functional research that has followed these genetic breakthroughs has generated an extensive literature regarding LRRK2 pathophysiology; however, there is still no consensus as to the biological function of LRRK2. To provide insight into the aspects of cell biology that are consistently related to LRRK2 activity, we analysed the plethora of candidate LRRK2 interactors available through the BioGRID and IntAct data repositories. We then performed GO term enrichment for the LRRK2 interactome. We found that, in two different enrichment portals, the LRRK2 interactome was associated with terms referring to transport, cellular organization, vesicles and the cytoskeleton. We also verified that 21 of the LRRK2 interactors are genetically linked to risk for Parkinson's disease or inflammatory bowel disorder. The implications of these findings are discussed, with particular regard to potential novel areas of investigation.
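The enrichment step described here is typically a one-sided hypergeometric (over-representation) test; the sketch below shows the shape of that calculation with placeholder counts rather than the paper's actual numbers, and in practice the enrichment portals also apply multiple-testing correction.

    # Sketch of a GO term over-representation test with placeholder counts.
    from scipy.stats import hypergeom

    population = 20000        # assumed background of protein-coding genes
    category_size = 600       # genes annotated with the GO term of interest
    interactome_size = 400    # candidate LRRK2 interactors tested
    overlap = 35              # interactors carrying that GO annotation

    # P(X >= overlap) under random sampling without replacement
    p_value = hypergeom.sf(overlap - 1, population, category_size, interactome_size)
    print(f"enrichment p-value: {p_value:.2e}")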