846 results for Semantic Publishing, Linked Data, Bibliometrics, Informetrics, Data Retrieval, Citations
Abstract:
The Linked Data approach is presented, which uses descriptions written in RDF to make explicit to machines the semantic links that exist between the resources populating the Web. The DBpedia project is then described, which aims to reorganize the information available on Wikipedia in Linked Data format, so as to make it more easily consultable by users and to enable the execution of complex queries. The challenge of integrating multimedia content (images, audio files, videos, etc.) into DBpedia is then discussed, and three projects addressing it are analyzed: Multipedia, DBpedia Commons and IMGpedia. Finally, the importance and potential of creating a Semantic Web are underlined.
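The core idea of the abstract above, semantic links made explicit as machine-readable statements, can be illustrated with a minimal sketch of the RDF triple model. The DBpedia-style identifiers and the tiny in-memory store below are invented for illustration; they are not data retrieved from DBpedia.

```python
# Minimal sketch of the RDF data model: each statement is a
# (subject, predicate, object) triple, so the links between resources
# become explicit, queryable data. Identifiers are illustrative only.
triples = {
    ("dbr:Rome", "rdf:type", "dbo:City"),
    ("dbr:Rome", "dbo:country", "dbr:Italy"),
    ("dbr:Italy", "dbo:capital", "dbr:Rome"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Everything stated about dbr:Rome as a subject:
print(sorted(query(s="dbr:Rome")))
```

Pattern matching over triples like this is exactly what SPARQL generalizes, which is what makes complex queries over DBpedia possible.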
Abstract:
v.4. The mechanization of data retrieval. -- v.5. Emerging solutions for mechanizing the storage and retrieval of information. -- v.6. The coming age of information technology.
Abstract:
We investigated whether a physiological marker of cardiovascular health, pulse pressure (PP), and age magnified the effect of the functional COMT Val158Met (rs4680) polymorphism on 15-year cognitive trajectories [episodic memory (EM), visuospatial ability, and semantic memory] using data from 1585 non-demented adults from the Betula study. A multiple-group latent growth curve model was specified to gauge individual differences in change, and average trends therein. The allelic variants showed negligible differences across the cognitive markers in average trends. In the older portion of the sample, age selectively magnified the effects of Val158Met on EM changes, resulting in greater decline in Val carriers compared to homozygous Met carriers. This effect was attenuated by statistical control for PP. Further, PP moderated the effects of COMT on 15-year EM trajectories, resulting in greater decline in Val carriers, even after accounting for the confounding effects of sex, education, cardiovascular diseases (diabetes, stroke, and hypertension), and chronological age, controlled for practice gains. The effect was still present after excluding individuals with a history of cardiovascular diseases. The effects on cognitive change were not moderated by any other covariates. This report underscores the importance of addressing synergistic effects in normal cognitive aging, as the addition thereof may place healthy individuals at greater risk for memory decline.
Abstract:
Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM called direct mesh that is designed specifically for secondary storage. It supports available indexing methods natively and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5-10 times over the existing methods.
Abstract:
Terrain can be approximated by a triangular mesh consisting of millions of 3D points. Multiresolution triangular mesh (MTM) structures are designed to support applications that use terrain data at variable levels of detail (LOD). Typically, an MTM adopts a tree structure where a parent node represents a lower-resolution approximation of its descendants. Given a region of interest (ROI) and a LOD, the process of retrieving the required terrain data from the database is to traverse the MTM tree from the root to reach all the nodes satisfying the ROI and LOD conditions. This process, while being commonly used for multiresolution terrain visualization, is inefficient, as it incurs either a large number of sequential I/O operations or the fetching of a large amount of extraneous data. Various spatial indexes have been proposed in the past to address this problem; however, level-by-level tree traversal remains a common practice in order to obtain topological information among the retrieved terrain data. A new MTM data structure called direct mesh is proposed. We demonstrate that with direct mesh the amount of data retrieval can be substantially reduced. Compared with existing MTM indexing methods, a significant performance improvement is observed for real-life terrain data.
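The level-by-level traversal that the abstract identifies as common practice can be sketched as follows. The node layout, field names and tile geometry are illustrative assumptions, not the direct-mesh structure proposed in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical MTM node (illustrative, not the paper's actual structure):
# a bounding box, a resolution level (0 = coarsest), and children that
# refine the parent's approximation.
@dataclass
class Node:
    bbox: tuple                     # (xmin, ymin, xmax, ymax)
    level: int
    children: list = field(default_factory=list)

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def retrieve(node, roi, lod):
    """Traverse the MTM tree from the root, pruning subtrees whose
    bounding box misses the ROI and stopping at the requested LOD."""
    if not intersects(node.bbox, roi):
        return []                   # outside the region of interest
    if node.level == lod or not node.children:
        return [node]               # coarse enough, or a leaf
    out = []
    for child in node.children:
        out.extend(retrieve(child, roi, lod))
    return out

# Example: one coarse root tile refined into four level-1 quadrants.
root = Node((0, 0, 10, 10), 0, [
    Node((0, 0, 5, 5), 1), Node((5, 0, 10, 5), 1),
    Node((0, 5, 5, 10), 1), Node((5, 5, 10, 10), 1),
])
tiles = retrieve(root, roi=(0, 0, 4, 4), lod=1)
print([t.bbox for t in tiles])      # → [(0, 0, 5, 5)]
```

On disk, each recursive step of this traversal can cost a random I/O, which is the inefficiency the direct-mesh structure is designed to avoid.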
Abstract:
The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented which demonstrates the increasing complexity of the management problem when moving through Base, Intermediate and Advanced levels of servitization. Linked Data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and more generally in supply chains.
Abstract:
The value of knowing about data availability and system accessibility is analyzed through theoretical models of Information Economics. When a user places an inquiry for information, it is important for the user to learn whether the system is inaccessible or the data is unavailable, rather than to receive no response at all. In reality, various outcomes can be provided by the system: nothing is displayed to the user (e.g., a traffic light that does not operate, a browser that keeps loading, a telephone that does not answer); random noise is displayed (e.g., a traffic light that displays random signals, a browser that provides disorderly results, an automatic voice message that does not clarify the situation); or a special signal indicates that the system is not operating (e.g., a blinking amber light indicating that the traffic light is down, a browser responding that the site is unavailable, a voice message regretting to tell that the service is not available). This article develops a model to assess the value of the information for the user in such situations by employing the information structure model prevailing in Information Economics. Examples related to data accessibility in centralized and in distributed systems are provided for illustration.
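The kind of assessment the abstract describes can be sketched with the textbook value-of-information calculation from Information Economics: the value of a signal is the expected payoff when acting on it minus the expected payoff of the best uninformed action. The two-state setup and all numbers below are invented for illustration; the article's actual information-structure model is not reproduced here.

```python
# Hedged sketch: value of a perfect "system is down" signal.
# States, actions and payoffs are illustrative assumptions.

# Two states: the remote system is "up" or "down", with prior P(up) = 0.7.
prior = {"up": 0.7, "down": 0.3}

# Payoffs per action and state: querying pays off only if the system is
# up; working offline is a safe default.
payoff = {
    "send_query":   {"up": 10.0, "down": -5.0},
    "work_offline": {"up": 2.0,  "down": 2.0},
}

def expected(action, belief):
    """Expected payoff of an action under a probability belief."""
    return sum(belief[s] * payoff[action][s] for s in belief)

# Without any signal: pick the action with the best prior expected payoff.
value_without = max(expected(a, prior) for a in payoff)   # ≈ 5.5

# With a perfect signal (e.g., an explicit "service unavailable" message),
# the user learns the true state and picks the best action per state.
value_with = sum(
    prior[s] * max(payoff[a][s] for a in payoff) for s in prior
)                                                         # ≈ 7.6

print(value_with - value_without)   # value of the signal, ≈ 2.1
```

The same arithmetic extends to noisy signals (the "random noise" case above) by replacing the perfect signal with a conditional distribution over observations.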
Abstract:
Today, the question of how to successfully reduce supply chain costs whilst increasing customer satisfaction continues to be the focus of many firms. It is noted in the literature that supply chain automation can increase flexibility whilst reducing inefficiencies. However, in the dynamic and process-driven environment of distribution, there is no cohesive automation approach to guide companies in improving network competitiveness. This paper aims to address the gap in the literature by developing a three-level automation application framework with the assistance of radio frequency identification (RFID) technology and returnable transport equipment (RTE). The first level considers the automation of data retrieval and highlights the benefits of RFID. The second level consists of automating distribution processes such as unloading and assembling orders. As labour is reduced with the introduction of RFID-enabled robots, the balance between automation and labour is discussed. Finally, the third level is an analysis of the decision-making process at network points and the application of cognitive automation to objects. A distribution network scenario is formed and used to illustrate network reconfiguration at each level. The research pinpoints that RFID-enabled RTE offers a viable tool to assist supply chain automation. Further research is proposed, in particular in the area of cognitive automation to aid decision-making.
Abstract:
Location systems have become an increasing part of people's lives. For outdoor environments, GPS has emerged as the standard technology, widely disseminated and used. However, people usually spend most of their daily time in indoor environments, such as hospitals, universities, factories and office buildings. In these environments, GPS does not work properly, resulting in inaccurate positioning. Currently, no single technology for locating people or objects indoors can reproduce the results that GPS achieves outdoors. It is therefore necessary to consider combining information from multiple sources using different technologies. Thus, this work aims to build an adaptable platform for indoor location. To this end, the IndoLoR platform is proposed, which supports receiving information from different sources, along with data processing, data fusion, data storage and data retrieval for the indoor location context.
Abstract:
Starting from William James's pragmatist philosophy, which values the notion of fragmentation and the disjunctive joining of fragments, as well as from post-'68 French philosophy, the notion of the document as assemblage was outlined, making it possible to trace the evolution of protocols for bibliographic description from AACR, through the FRBR conceptual model and RDA, up to the Semantic Web, where rhizomatic structures of knowledge representation are identified.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. These collectively imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome, and an expression of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity, a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. They have the capacity to be deconstructed into their atomic units, and responsibility for their resolution can be delegated appropriately. We continue to describe how the manifestation of risks typically spans an entire organisational environment, and, as the focus of our analysis, risk safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements. To do so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. 
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are exposed by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.
Bioqueries: a collaborative environment to create, explore and share SPARQL queries in Life Sciences
Abstract:
Bioqueries provides a collaborative environment to create, explore, execute, clone and share SPARQL queries (including Federated Queries). Federated SPARQL queries can retrieve information from more than one data source.
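The SPARQL 1.1 SERVICE keyword is what makes such federated queries possible: part of the graph pattern is evaluated by a remote endpoint and joined with the local results. The following is an illustrative sketch only; the prefixes, endpoint URL and properties are assumptions, not a query taken from the Bioqueries portal.

```sparql
# Illustrative federated query: the SERVICE clause forwards part of the
# pattern to a second endpoint. Endpoint URL and properties are examples.
PREFIX up:   <http://purl.uniprot.org/core/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?protein ?label
WHERE {
  ?protein a up:Protein ;
           rdfs:label ?label .
  SERVICE <https://sparql.example.org/second-endpoint> {
    ?protein up:annotation ?annotation .
  }
}
LIMIT 10
```

The join on `?protein` across the two pattern groups is what lets a single query combine information from more than one data source.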
Abstract:
The analysis of doctoral theses produced in a scientific field is one of the pillars for assessing the status of that field, a question raised within the project Mapping the Discipline History of Education. With this work we intend to broaden and deepen our previous studies of doctoral theses in the History of Education. We have already presented results on doctoral theses focused on one particular subject (History of Education in Franco's times) in 2013, and, in 2016, on doctoral theses registered in TESEO, the Spanish database for dissertations, in 2000, 2005 and 2010. Building on the work already presented on theses in France, Switzerland, Portugal and Italy, the aim of that article was to study the theses included in TESEO that have "History of Education" among their descriptors. We analyzed variables such as national or local character, the study period and the duration. In ISCHE 38 (Chicago 2016), we intend to analyze the doctoral theses presented in Spanish universities during a decade, focusing neither on a particular subject nor on a single database. Thus the main differences from our earlier research lie in the criteria: on the one hand, we will decide whether a doctoral thesis belongs to our field; on the other hand, we will not restrict ourselves to one database but will try to find doctoral theses in any database, repository or source.