885 results for Semantic Publishing, Linked Data, Bibliometrics, Informetrics, Data Retrieval, Citations
Abstract:
The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented which demonstrates the increasing complexity of the management problem when moving through the Base, Intermediate and Advanced levels of servitization. Linked data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and, more generally, in supply chains.
Abstract:
The value of knowing about data availability and system accessibility is analyzed through theoretical models of Information Economics. When a user places an inquiry for information, it is important for the user to learn whether the system is inaccessible or the data is unavailable, rather than receive no response at all. In practice, the system can produce various outcomes: nothing is displayed to the user (e.g., a traffic light that does not operate, a browser that keeps loading, a telephone that is not answered); random noise is displayed (e.g., a traffic light that displays random signals, a browser that returns disorderly results, an automatic voice message that does not clarify the situation); or a special signal indicates that the system is not operating (e.g., a blinking amber light indicating that the traffic light is down, a browser responding that the site is unavailable, a voice message regretting that the service is not available). This article develops a model to assess the value of such information for the user by employing the information structure model prevailing in Information Economics. Examples related to data accessibility in centralized and distributed systems are provided for illustration.
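The intuition behind valuing a "system down" signal can be shown with a toy numerical sketch. All probabilities and costs below are invented for illustration and are not taken from the article's model:

```python
# Toy sketch of the value of an explicit "system down" signal, in the
# spirit of the information-structure models described above.
# All numbers are invented for illustration only.

P_DOWN = 0.2        # probability the system is unavailable
WAIT_COST = 10.0    # cost of waiting for a response that never comes
RETRY_COST = 1.0    # cost of immediately switching to an alternative source

# Without any signal, the user always waits and pays the waiting cost
# whenever the system happens to be down.
cost_no_signal = P_DOWN * WAIT_COST

# With a perfect "system down" signal, the user retries elsewhere at once,
# paying only the (smaller) retry cost in the down state.
cost_with_signal = P_DOWN * RETRY_COST

# The value of the information is the expected cost it saves the user.
value_of_signal = cost_no_signal - cost_with_signal
print(value_of_signal)  # 1.8
```

The same comparison extends naturally to noisy signals, where the signal is only probabilistically related to the system's true state.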
Abstract:
Today, the question of how to reduce supply chain costs whilst increasing customer satisfaction continues to be the focus of many firms. The literature notes that supply chain automation can increase flexibility whilst reducing inefficiencies. However, in the dynamic and process-driven environment of distribution, there is no cohesive automation approach to guide companies in improving network competitiveness. This paper aims to address this gap in the literature by developing a three-level automation application framework with the assistance of radio frequency identification (RFID) technology and returnable transport equipment (RTE). The first level considers the automation of data retrieval and highlights the benefits of RFID. The second level consists of automating distribution processes such as unloading and assembling orders. As labour is reduced with the introduction of RFID-enabled robots, the balance between automation and labour is discussed. Finally, the third level is an analysis of the decision-making process at network points and the application of cognitive automation to objects. A distribution network scenario is formed and used to illustrate network reconfiguration at each level. The research pinpoints that RFID-enabled RTE offers a viable tool to assist supply chain automation. Further research is proposed, in particular in the area of cognitive automation to aid decision-making.
Abstract:
Location systems have become an increasing part of people's lives. For outdoor environments, GPS is the standard technology, widely disseminated and used. However, people usually spend most of their daily time in indoor environments such as hospitals, universities, factories and office buildings. In these environments GPS does not work properly, causing inaccurate positioning. Currently, no single technology can reproduce indoors, for locating people or objects, the results achieved by GPS outdoors. It is therefore necessary to consider using information from multiple sources based on different technologies. Thus, this work aims to build an adaptable platform for indoor location. Based on this goal, the IndoLoR platform is proposed. This platform allows information reception from different sources, along with data processing, data fusion, data storage and data retrieval for the indoor location context.
Abstract:
Drawing on the pragmatist philosophy of William James, which values the notion of fragmentation and the disjunctive joining of fragments, as well as on post-1968 French philosophy, the notion of the document as assemblage is outlined, making it possible to trace the evolution of protocols for bibliographic description from AACR, through the FRBR conceptual model and RDA, to the Semantic Web, where rhizomatic structures of knowledge representation are identified.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they demand an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risk can be characterised according to associated goals, activities, responsibilities and policies in terms of both manifestation and mitigation. Risks can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We continue by describing how the manifestation of risks typically spans an entire organisational environment; as the focus of our analysis, risk safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements. Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
Bioqueries: a collaborative environment to create, explore and share SPARQL queries in Life Sciences
Abstract:
Bioqueries provides a collaborative environment to create, explore, execute, clone and share SPARQL queries (including Federated Queries). Federated SPARQL queries can retrieve information from more than one data source.
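A federated query of the kind Bioqueries hosts might look like the following sketch, which combines a local triple pattern with labels fetched from the public DBpedia endpoint. The class URI and graph shape are illustrative assumptions, not taken from Bioqueries itself:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Run against a local endpoint; the SERVICE clause (SPARQL 1.1 Federated
# Query) delegates part of the pattern to a second, remote endpoint.
SELECT ?protein ?label
WHERE {
  ?protein a <http://example.org/Protein> .   # local data source
  SERVICE <https://dbpedia.org/sparql> {      # remote data source
    ?protein rdfs:label ?label .
    FILTER (lang(?label) = "en")
  }
}
LIMIT 10
```

Each SERVICE block is evaluated against its own endpoint, and the results are joined with the local patterns, which is what lets one query span more than one data source.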
Abstract:
The analysis of doctoral theses produced in a scientific field is one of the pillars of the field's status, and this question has been raised within the project Mapping the Discipline History of Education. With this work we intend to broaden and deepen our previous studies of doctoral theses in the history of education. We have already presented results on doctoral theses focused on one particular subject (history of education in Franco's times) in 2013 and, in 2016, on doctoral theses registered in TESEO, the Spanish database for dissertations, in 2000, 2005 and 2010. Starting from the works already presented on theses in France, Switzerland, Portugal and Italy, the aim of that article was to study the theses included in TESEO which have "History of education" among their descriptors. We analyzed variables such as national or local character, the study period and the duration. In ISCHE 38 (Chicago 2016), we intend to analyze the doctoral theses presented in Spanish universities during a decade, focusing neither on a particular subject nor on a single database. Thus the main differences from our earlier research lie in the criteria: on the one hand, we will decide whether a doctoral thesis belongs to our field; on the other hand, we will not use only one database but will try to find doctoral theses in any database, repository or source.
Abstract:
This article presents OntPersonal, a personalization ontology for the ITINER@ application, a generator of tourist routes based on semantic information. The OntPersonal ontology models a set of tourist preferences and context restrictions associated with the end user (the tourist), which together constitute the user's profile. Using a set of SWRL rules, the system attempts to infer the points of interest (POIs, or visitable sites), obtained from an instantiated external ontology, that are most relevant to each profile. This information, combined with other considerations, could be used by the ITINER@ system to build personalized tourist routes. The paper presents the results obtained when evaluating the ontology using POIs from the region of Esterri d'Àneu in Catalonia, Spain.
Abstract:
The title of the study is "Toxicology Literature: An Informetric Analysis". In the field of toxicology, interdisciplinary research has resulted in 'information fragmentation' of the basic subject into environmental, medical and economic toxicology. The interest in collaborative research has resulted in the transdisciplinary growth of toxicology, which has ultimately resulted in the scatter of its literature. For the purpose of the present study, toxicology is defined as the physical and chemical aspects of all poisons affecting the environmental, economic and medical aspects of human life. Informetrics is "the use and development of a variety of measures to study and analyse several properties of information in general and documents in particular." The present study sheds light on the main fields of toxicology research as well as the important primary journals through which results are published. The authorship pattern, subject-wise scatter, country-wise, language-wise and growth patterns, self-citation, and bibliographic coupling of the journals were studied. The study will be of great use in formulating the acquisition policy of documents in a library, and is useful in identifying obsolete journals so that they can be discarded from the collection.
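Bibliographic coupling, one of the measures studied above, scores two publications (or journals) by the number of references they share. A minimal sketch, with journal names and reference lists invented for illustration:

```python
# Bibliographic coupling: two documents are "coupled" when their reference
# lists overlap; the coupling strength is the size of that overlap.
# Journal names and reference identifiers below are invented.

def coupling_strength(refs_a, refs_b):
    """Number of references shared by two reference lists."""
    return len(set(refs_a) & set(refs_b))

journal_refs = {
    "Journal A": ["r1", "r2", "r3", "r4"],
    "Journal B": ["r2", "r3", "r5"],
    "Journal C": ["r6"],
}

# Compute the coupling strength for every pair of journals.
pairs = {}
names = sorted(journal_refs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        pairs[(a, b)] = coupling_strength(journal_refs[a], journal_refs[b])

print(pairs[("Journal A", "Journal B")])  # 2 (shared references r2, r3)
print(pairs[("Journal A", "Journal C")])  # 0
```

Note that coupling is fixed at publication time (it depends only on reference lists), unlike co-citation, which grows as later works cite both documents.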
Abstract:
This work builds a citation network from scientific articles encoded in JATS XML. It opens with an introduction to semantic publishing, the relevant ontologies and the main datasets of scientific publications. Finally, the CiNeX prototype is presented, which extracts an RDF graph from a JATS XML dataset using the SPAR ontologies.
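The extraction step from JATS XML to a citation graph can be sketched as follows. This is not the CiNeX code: the JATS snippet and DOIs are invented, real JATS files are far richer, and only the `cites` relation from CiTO (one of the SPAR ontologies) is emitted:

```python
# Sketch: read citation DOIs from a JATS-encoded article and emit
# RDF-style (subject, predicate, object) triples using cito:cites.
import xml.etree.ElementTree as ET

JATS = """
<article>
  <front><article-meta>
    <article-id pub-id-type="doi">10.1000/article.1</article-id>
  </article-meta></front>
  <back><ref-list>
    <ref id="b1"><element-citation>
      <pub-id pub-id-type="doi">10.1000/cited.1</pub-id>
    </element-citation></ref>
    <ref id="b2"><element-citation>
      <pub-id pub-id-type="doi">10.1000/cited.2</pub-id>
    </element-citation></ref>
  </ref-list></back>
</article>
"""

CITES = "http://purl.org/spar/cito/cites"

def extract_citation_triples(jats_xml):
    root = ET.fromstring(jats_xml)
    # DOI of the citing article, from <article-meta>.
    citing = root.findtext(".//article-id[@pub-id-type='doi']")
    # One triple per cited DOI found in the reference list.
    return [(citing, CITES, pub_id.text)
            for pub_id in root.iterfind(".//pub-id[@pub-id-type='doi']")]

for triple in extract_citation_triples(JATS):
    print(triple)
```

Merging the triples extracted from every article in the dataset yields the citation network as a single RDF graph.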
Abstract:
A user-friendly web application that supports researchers in efficiently performing specific tasks of search and analysis over scientific articles.
Abstract:
A workflow-centric research object bundles a workflow, the provenance of the results obtained by its enactment, other digital objects that are relevant for the experiment (papers, datasets, etc.), and annotations that semantically describe all these objects. In this paper, we propose a model to specify workflow-centric research objects, and show how the model can be grounded using semantic technologies and existing vocabularies, in particular the Object Reuse and Exchange (ORE) model and the Annotation Ontology (AO). We describe the life-cycle of a research object, which resembles the life-cycle of a scientific experiment.
Abstract:
With the rise of smart phones, lifelogging devices (e.g. Google Glass) and the popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to "tag" their photos; however, these techniques are laborious and, as a result, have been poorly adopted. Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with fewer than 4 tags. Due to this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image's appearance and its meaning, e.g. the objects present. Despite two decades of research the semantic gap still largely exists, and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the "bigger picture" surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year?
This thesis proposes the exploitation of an image's context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the "what, who, where, when and how" of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine text, content-based (e.g. the number of faces present) and contextual (e.g. the day of the week taken) signals for tag recommendation purposes, achieving up to a 75% improvement to precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia and Twitter) as an alternative to the (slower moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster moving, yet entirely textual, datasets. In part II, we also highlight the merits of diversifying tag recommendation lists before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely "visual event summarisation", "image popularity prediction" and "lifelog summarisation". In the first scenario, we attempt to produce a rank of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, before (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become "popular" (or not) with 74% accuracy, using an SVM classifier.
Finally, in chapter 9 we employ blur detection and perceptual-hash clustering in order to remove noisy images from lifelogs, before combining visual and geo-temporal signals in order to capture a user's "key moments" within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models where sufficient textual content is lacking (i.e. a cold start).
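The co-occurrence-based tag recommendation that this line of work builds on can be sketched minimally as follows. The "historical" tag sets are invented for illustration, and this baseline uses only tag co-occurrence, not the contextual signals the thesis adds:

```python
# Sketch of co-occurrence-based photo tag recommendation: given the tags
# already on a photo, suggest the tags that most often appeared alongside
# them in historical images. Tag data below is invented.
from collections import Counter
from itertools import combinations

history = [
    {"ny", "timessquare", "night"},
    {"ny", "timessquare", "crowd"},
    {"ny", "brooklyn"},
    {"newyear", "fireworks"},
]

# Count how often each ordered pair of tags co-occurs in an image.
cooc = Counter()
for tags in history:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(existing_tags, k=3):
    """Rank candidate tags by total co-occurrence with the existing tags."""
    scores = Counter()
    for tag in existing_tags:
        for (a, b), n in cooc.items():
            if a == tag and b not in existing_tags:
                scores[b] += n
    return [t for t, _ in scores.most_common(k)]

print(recommend({"ny"}))  # 'timessquare' ranks first (2 co-occurrences)
```

The ambiguity discussed above is visible even here: "ny" pulls in both Times Square tags and, in a larger history, would also pull in New Year tags, which is exactly what contextual signals help disambiguate.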