203 results for RDF
Abstract:
In order to fulfil European and Portuguese legal requirements, adequate alternatives to traditional municipal waste landfilling must be found, particularly for organic wastes and other wastes amenable to valorisation. According to the Portuguese Standard NP 4486:2008, refuse-derived fuel (RDF) classification is based on three main parameters: lower heating value (considered an economic parameter), chlorine content (considered a technical parameter) and mercury content (considered an environmental parameter). The purpose of this study was to characterize the reject streams resulting from the mechanical treatment of unsorted municipal solid waste, from the municipal selective collection of plastics and from the composting process, in order to evaluate their potential as RDF. To accomplish this purpose, six sampling campaigns were performed. Chemical characterization comprised the proximate analysis (moisture content, volatile matter, ash and fixed carbon) as well as trace elements. Physical characterization was also performed. To evaluate their potential as RDF, the parameters established in the Portuguese standard, heating value and chlorine content, were also determined. As expected, the results show that the reject stream from mechanical treatment is rather different from the selective-collection reject and from the compost-screening reject in terms of moisture, combustible matter and ash, as well as heating value and chlorine. Preliminary data allow us to conclude that the studied materials have very interesting potential for use as RDF. In fact, the rejects from selective collection and from composting have heating values not very different from that of coal. Therefore, a key factor may be blending these materials, after pre-processing, with others of higher heating value, in order to obtain fuel pellets with good consistency, storage and handling characteristics and, therefore, good combustion behavior.
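As a rough illustration of the three-parameter classification scheme described above, the sketch below assigns a class number per parameter and combines them into a class code. The threshold tuples and function names are placeholders chosen for illustration only; they are not the limits defined in NP 4486:2008.

```python
# Illustrative three-parameter classification in the spirit of the scheme above.
# The threshold tuples are placeholders for illustration only -- they are NOT the
# limits defined in NP 4486:2008; substitute the values from the standard.

def classify(value, limits, lower_is_better):
    """Return a class number (1 = best) for one parameter."""
    for cls, limit in enumerate(limits, start=1):
        if (value <= limit) if lower_is_better else (value >= limit):
            return cls
    return len(limits) + 1

def rdf_class_code(lhv_mj_per_kg, chlorine_pct, mercury_mg_per_mj,
                   lhv_limits=(25, 20, 15, 10),          # economic parameter (placeholder)
                   cl_limits=(0.2, 0.6, 1.0, 1.5),       # technical parameter (placeholder)
                   hg_limits=(0.02, 0.03, 0.08, 0.15)):  # environmental parameter (placeholder)
    return {
        "LHV class": classify(lhv_mj_per_kg, lhv_limits, lower_is_better=False),
        "Cl class": classify(chlorine_pct, cl_limits, lower_is_better=True),
        "Hg class": classify(mercury_mg_per_mj, hg_limits, lower_is_better=True),
    }

# Example: a reject stream with LHV 18 MJ/kg, 0.5 % Cl and 0.05 mg Hg/MJ
print(rdf_class_code(18, 0.5, 0.05))  # {'LHV class': 3, 'Cl class': 2, 'Hg class': 3}
```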
Abstract:
The challenges of maintaining a building such as the Sydney Opera House are immense and depend upon a vast array of information. The value of information can be enhanced by its currency, accessibility and the ability to correlate data sets (integration of information sources). A building information model correlated with the various information sources related to the facility is used here as the definition of a digital facility model. Such a digital facility model would give transparent, integrated access to an array of datasets and would clearly support Facility Management processes. In order to construct such a digital facility model, two state-of-the-art Information and Communication Technologies are considered: an internationally standardized building information model called the Industry Foundation Classes (IFC) and a variety of advanced communication and integration technologies often referred to as the Semantic Web, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). This paper reports on some technical aspects of developing a digital facility model focusing on the Sydney Opera House. The proposed digital facility model enables IFC data to participate in an ontology-driven, service-oriented software environment. A proof-of-concept prototype has been developed, demonstrating the usability of IFC information in collaboration with Sydney Opera House’s specific data sources using Semantic Web ontologies.
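As a loose illustration of the Semantic Web side of such a digital facility model, the sketch below stores a few IFC-derived facility facts as RDF with rdflib and queries them with SPARQL. The namespace, class and property names are invented for this example; they are not the paper's ontology nor actual Sydney Opera House data.

```python
# Minimal sketch of the Semantic Web side of a digital facility model: a few
# IFC-derived facility facts stored as RDF with rdflib and queried with SPARQL.
# The namespace, classes and properties below are invented for illustration;
# they are not the paper's ontology nor actual Sydney Opera House data.
from rdflib import Graph, Namespace, Literal, RDF

FM = Namespace("http://example.org/facility#")  # hypothetical facility ontology
g = Graph()
g.bind("fm", FM)

# A space, as it might be extracted from an IfcSpace, linked to a maintenance record
g.add((FM.ConcertHall, RDF.type, FM.Space))
g.add((FM.ConcertHall, FM.globalId, Literal("SPACE-0001")))
g.add((FM.ConcertHall, FM.hasMaintenanceRecord, FM.Record42))
g.add((FM.Record42, FM.status, Literal("scheduled")))

# Which spaces have a scheduled maintenance record?
query = """
PREFIX fm: <http://example.org/facility#>
SELECT ?space WHERE {
    ?space a fm:Space ;
           fm:hasMaintenanceRecord ?rec .
    ?rec fm:status "scheduled" .
}
"""
for row in g.query(query):
    print(row.space)
```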
Abstract:
Scientists need to transfer semantically similar queries across multiple heterogeneous linked datasets. These queries may require data from different locations and the results are not simple to combine due to differences between datasets. A query model was developed to make it simple to distribute queries across different datasets using RDF as the result format. The query model, based on the concept of publicly recognised namespaces for parts of each scientific dataset, was implemented with a configuration that includes a large number of current biological and chemical datasets. The configuration is flexible, providing the ability to transparently use both private and public datasets in any query. A prototype implementation of the model was used to resolve queries for the Bio2RDF website, including both Bio2RDF datasets and other datasets that do not follow the Bio2RDF URI conventions.
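A minimal sketch of the underlying idea, using placeholder endpoint URLs rather than the actual Bio2RDF configuration: send one CONSTRUCT query to several SPARQL endpoints and merge the RDF results into a single graph.

```python
# Sketch of the basic idea only (not the paper's query model): send one CONSTRUCT
# query to several SPARQL endpoints and merge the RDF results into a single graph.
# The endpoint URLs are placeholders; a real configuration would list Bio2RDF and
# other dataset endpoints here.
from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper, RDFXML

ENDPOINTS = [
    "http://example.org/dataset-a/sparql",  # placeholder
    "http://example.org/dataset-b/sparql",  # placeholder
]

QUERY = """
CONSTRUCT { ?s ?p ?o }
WHERE     { ?s ?p ?o }
LIMIT 100
"""

merged = Graph()
for endpoint in ENDPOINTS:
    client = SPARQLWrapper(endpoint)
    client.setQuery(QUERY)
    client.setReturnFormat(RDFXML)      # ask for RDF, the common result format
    merged += client.query().convert()  # CONSTRUCT + RDFXML yields an rdflib Graph

print(f"{len(merged)} triples merged from {len(ENDPOINTS)} endpoints")
```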
Abstract:
This thesis provides a query model suitable for context sensitive access to a wide range of distributed linked datasets which are available to scientists using the Internet. The model is designed based on scientific research standards which require scientists to provide replicable methods in their publications. Although there are query models available that provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on their trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this function. The query model was implemented as a prototype web application and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to have context sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to query provenance documents.
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, in order for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications, to identify not only their advantages but also the operational challenges that real users face. In this paper, we examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step toward resolving some of these issues in our Semantic Modeling System. We evaluate our Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, conducted with domain experts. We discuss how our Semantic Modeling System can help address the broader problem and also discuss a number of challenges that still remain.
Abstract:
Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster will present key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, which is currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs. RESEARCH DATA FINDER: As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research datasets. SOFTWARE AND CODE REGISTRY: The QUT Library software and code registry project stems from researchers’ concerns regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open-source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software within QUT and the QUT research community; increasing the profile of QUT research outputs at a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.
Abstract:
Molecular dynamics simulations are reported on the structure and dynamics of n-decane and 3-methylpentane in zeolite NaY. We have calculated several properties such as the center of mass–center of mass radial distribution function (RDF), the end-to-end distance distribution, the bond angle distribution and the dihedral angle distribution. We have also analysed the trajectories to obtain the diffusivity and the velocity autocorrelation function (VACF). Surprisingly, the diffusivity of 3-methylpentane, which has a larger cross-section perpendicular to the long molecular axis, is higher than that of n-decane at 300 K. Activation energies have been obtained from simulations performed at 200 K, 300 K, 350 K, 400 K and 450 K in the NVE ensemble. These results can be understood in terms of the previously known levitation effect. The Arrhenius plot has a steeper slope for n-decane (5.9 kJ/mol) than for 3-methylpentane (3.7 kJ/mol), in agreement with the prediction of the levitation effect.
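The Arrhenius analysis mentioned above can be sketched as a straight-line fit of ln D against 1/T; the diffusivity values used below are made-up placeholders, not the simulation results reported in the abstract.

```python
# Sketch of the Arrhenius analysis mentioned above: fit ln(D) against 1/T to
# extract an activation energy. The diffusivities below are made-up placeholders,
# not the simulation results reported in the abstract.
import numpy as np

R = 8.314  # gas constant, J/(mol K)
T = np.array([200.0, 300.0, 350.0, 400.0, 450.0])        # temperatures, K
D = np.array([1.0e-9, 3.2e-9, 4.5e-9, 6.0e-9, 7.4e-9])   # diffusivities, m^2/s (placeholders)

# ln D = ln D0 - Ea / (R T), so the slope of ln D vs 1/T is -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea_kJ_per_mol = -slope * R / 1000.0
print(f"Activation energy: {Ea_kJ_per_mol:.1f} kJ/mol, D0 = {np.exp(intercept):.2e} m^2/s")
```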
Abstract:
In recent years new emphasis has been placed on the environmental aspects of waste disposal, especially the investigation of alternatives to landfill, sea dumping and incineration. There is also a strong emphasis on clean, economic and efficient processes for electric power generation. These two topics may at first appear unrelated. Nevertheless, technological advances are now such that a solution to both can be combined in a novel approach to power generation based on waste-derived fuels, including refuse-derived fuel (RDF) and sludge power (SP), by utilising a slagging gasifier and advanced fuel technology (AFT). The most appropriate gasification technique for such waste utilisation is the British Gas/Lurgi (BGL) high-pressure, fixed-bed slagging gasifier, whose operation on a range of feedstocks has been well documented. This gasifier is particularly amenable to briquette fuel feeding and is especially advantageous when operated in integrated gasification combined cycle (IGCC) mode. Here, the author details how this technology has been applied to Britain's first AFT-IGCC power station, now under development at Fife Energy Ltd. in Scotland, the former British Gas Westfield Development Centre.
Abstract:
Molecular dynamics was used to simulate the copper-aluminium diffusion bonding process. The thickness of the transition layer formed by diffusion bonding between the (001) crystal planes of ideally flat Cu-Al specimens was analysed, and radial distribution and bond-pair analysis methods were used to examine the structural changes of the transition layer at different cooling rates. At high cooling rates the transition layer retains its original disordered structure, whereas at low cooling rates it transforms from a disordered structure into a face-centred cubic structure. Tensile simulations were also performed on the diffusion-bonded Cu-Al specimens and compared with tensile simulations of single-crystal copper and single-crystal aluminium of similar size. The results show that the strength of the bonded specimen is lower than that of both single-crystal aluminium and single-crystal copper, and its maximum strain is also smaller.
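One common way to estimate such a transition-layer thickness, shown here purely as a hedged sketch and not necessarily the procedure used in this work, is to bin atoms along the bonding direction and measure the width of the region where the composition is mixed; all coordinates below are fabricated.

```python
# Hedged sketch of one common way to estimate a diffusion transition-layer
# thickness from an MD snapshot (not necessarily the procedure used here):
# bin atoms along the bonding direction and measure the width of the region
# where the composition is mixed. All coordinates below are fabricated.
import numpy as np

def transition_layer_thickness(z_cu, z_al, bin_width=2.0, lo=0.1, hi=0.9):
    """z_cu, z_al: z coordinates (angstrom) of Cu and Al atoms."""
    z_all = np.concatenate([z_cu, z_al])
    edges = np.arange(z_all.min(), z_all.max() + bin_width, bin_width)
    cu_counts, _ = np.histogram(z_cu, bins=edges)
    al_counts, _ = np.histogram(z_al, bins=edges)
    total = cu_counts + al_counts
    frac_cu = cu_counts / np.maximum(total, 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mixed = (total > 0) & (frac_cu > lo) & (frac_cu < hi)  # neither pure Cu nor pure Al
    return float(centers[mixed].max() - centers[mixed].min()) if mixed.any() else 0.0

# Toy usage: Cu mostly below z = 0, Al mostly above, with some interdiffusion
rng = np.random.default_rng(0)
z_cu = np.concatenate([rng.uniform(-40, 0, 4000), rng.uniform(0, 6, 200)])
z_al = np.concatenate([rng.uniform(0, 40, 4000), rng.uniform(-6, 0, 200)])
print(f"Estimated transition-layer thickness: {transition_layer_thickness(z_cu, z_al):.1f} angstrom")
```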
Abstract:
Co-combustion experiments of semi-coke and refuse-derived fuel (RDF) were carried out in a fluidized bed with non-uniform air distribution, and the effect of different semi-coke blending ratios on the flue gas composition, as well as the feasibility and economics of co-firing the two fuels, was studied. The results show that as the semi-coke blending ratio increases, the contents of H2O (gaseous), CO, HCl, NO and C3H6 all decrease, while the contents of CO2 and SO2 increase. It was also found that increasing the chlorine content during combustion promotes the formation of NO.
Abstract:
Molecular dynamics simulations were used to model the formation of amorphous Cu and to study the microscopic mechanisms underlying its mechanical properties. The radial distribution function (RDF) and bond-pair analysis (PA) were used to analyse the structural changes inside the system during rapid cooling. The second peak of the RDF of amorphous Cu shows a clear splitting; bond-pair analysis indicates that the increase in 2331, 2211 and 2101 bonded pairs during rapid cooling is responsible for this splitting of the second RDF peak. Tensile and shear loading simulations of amorphous Cu show that after the stress reaches its maximum there is no abrupt stress drop, and the material exhibits macroscopically plastic-like deformation behaviour. However, the microscopic deformation mechanism of amorphous metals differs from that of crystals. A statistical analysis of the microscopic number density is introduced, showing that the local density evolves non-uniformly during loading, which provides clues and a possible analysis method for understanding the microscopic mechanism behind the macroscopic plastic-like behaviour.
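For reference, a radial distribution function of the kind used above to detect the split second peak can be computed as sketched below for a cubic periodic box; the positions are random placeholders rather than frames from an MD trajectory.

```python
# Sketch of computing a radial distribution function g(r) in a cubic periodic box,
# the kind of analysis used above to detect the split second peak. The positions
# here are random placeholders rather than frames from an MD trajectory.
import numpy as np

def radial_distribution(positions, box_length, r_max, n_bins=200):
    n = len(positions)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)  # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    rho = n / box_length**3                         # number density
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0               # expected pair counts for an ideal gas
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / ideal

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 20.0, size=(500, 3))         # placeholder configuration
r, g_of_r = radial_distribution(pos, box_length=20.0, r_max=8.0)
```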
Abstract:
More and more users aim to take advantage of the existing Linked Open Data environment by formulating a query over one dataset and then processing the same query over different datasets, one after another, in order to obtain a broader set of answers. However, the heterogeneity of the vocabularies used in the datasets on the one hand, and the scarcity of alignments among those datasets on the other, make that querying task difficult. Considering this scenario, we present in this paper a proposal that allows on-demand translation of queries formulated over an original dataset into queries expressed using the vocabulary of a targeted dataset. Our approach relieves users from having to know the vocabulary used in the targeted datasets and, moreover, it considers situations where alignments do not exist or are not suitable for the formulated query. Therefore, in order to favour the possibility of obtaining answers, there is sometimes no guarantee of a semantically equivalent translation. The core component of our proposal is a query rewriting model that applies a set of transformation rules devised from a pragmatic point of view. The feasibility of our scheme has been validated with queries defined in well-known benchmarks and in SPARQL endpoint logs, and the obtained results confirm it.
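The sketch below shows only the naive core of such a translation, substituting vocabulary URIs in a SPARQL query according to an alignment table; the paper's transformation rules are more sophisticated, and all URIs here are invented for illustration.

```python
# Highly simplified sketch of rewriting a query for a target dataset by replacing
# vocabulary URIs according to an alignment table. The paper's model applies richer
# transformation rules; all URIs below are invented for illustration.
ALIGNMENT = {
    "http://source.example.org/vocab#Book":   "http://target.example.org/terms/Publication",
    "http://source.example.org/vocab#author": "http://target.example.org/terms/creator",
}

def rewrite(query: str, alignment: dict) -> str:
    for source_uri, target_uri in alignment.items():
        query = query.replace(source_uri, target_uri)
    return query

original = """
SELECT ?who WHERE {
  ?b a <http://source.example.org/vocab#Book> ;
     <http://source.example.org/vocab#author> ?who .
}
"""
print(rewrite(original, ALIGNMENT))
```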
Abstract:
Background: In recent years Galaxy has become a popular workflow management system in bioinformatics, due to its ease of installation, use and extension. The availability of Semantic Web-oriented tools in Galaxy, however, is limited. This is also the case for Semantic Web Services such as those provided by the SADI project, i.e. services that consume and produce RDF. Here we present SADI-Galaxy, a tool generator that deploys selected SADI services as typical Galaxy tools. Results: SADI-Galaxy is a Galaxy tool generator: through SADI-Galaxy, any SADI-compliant service becomes a Galaxy tool that can participate in other outstanding features of Galaxy such as data storage, history, workflow creation and publication. Galaxy can also be used to execute and combine SADI services as it does with other Galaxy tools. Finally, we have semi-automated the packing and unpacking of data into RDF so that other Galaxy tools can easily be combined with SADI services, plugging the rich SADI Semantic Web Service environment into the popular Galaxy ecosystem. Conclusions: SADI-Galaxy bridges the gap between Galaxy, an easy-to-use but "static" workflow system with a wide user base, and SADI, a sophisticated, semantic, discovery-based framework for Web Services, thus benefiting both user communities.
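Outside Galaxy, a SADI-style service that consumes and produces RDF is typically invoked by POSTing RDF and parsing the RDF response; the sketch below assumes a placeholder service URL and an invented input vocabulary, and the content types accepted by a given service may differ. It illustrates "consume and produce RDF", not SADI-Galaxy itself.

```python
# Rough sketch of invoking a SADI-style service outside Galaxy: POST RDF that
# describes the input and parse the RDF that comes back. The service URL and the
# input vocabulary are placeholders, and the content types a given service accepts
# may differ; this illustrates "consume and produce RDF", not SADI-Galaxy itself.
import requests
from rdflib import Graph

SERVICE_URL = "http://example.org/sadi/some-service"  # placeholder

input_rdf = """@prefix ex: <http://example.org/vocab#> .
<http://example.org/input/1> a ex:InputClass ;
    ex:value "42" .
"""

resp = requests.post(
    SERVICE_URL,
    data=input_rdf.encode("utf-8"),
    headers={"Content-Type": "text/turtle", "Accept": "text/turtle"},
)
resp.raise_for_status()

output = Graph()
output.parse(data=resp.text, format="turtle")
print(f"Service returned {len(output)} triples")
```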
Abstract:
This paper describes the development of an automated design optimization system that uses a high-fidelity Reynolds-averaged CFD analysis procedure to minimize the fan forcing and the fan BOGV (bypass outlet guide vane) losses simultaneously, taking into account the downstream pylon and RDF (radial drive fairing) distortions. The design space consists of the OGV's stagger angle, trailing-edge recambering, and axial and circumferential positions, leading to a variable-pitch optimum design. An advanced optimization system called SOFT (Smart Optimisation for Turbomachinery) was used to integrate a number of pre-processor, simulation and in-house grid generation codes and post-processor programs. A number of multi-objective, multi-point optimizations were carried out by SOFT on a cluster of workstations and are reported herein.
Abstract:
Service-Oriented Architecture (SOA) and Web Services (WS) offer advanced flexibility and interoperability capabilities. However, they imply significant performance overheads that need to be carefully considered. Supply Chain Management (SCM) and traceability systems are an interesting domain for the use of WS technologies, which are usually deemed too complex and unnecessary in practical applications, especially regarding security. This paper presents an externalized security architecture that uses the eXtensible Access Control Markup Language (XACML) authorization standard to enforce visibility restrictions on traceability data in a supply chain where multiple companies collaborate; the performance overheads are assessed by comparing 'raw' authorization implementations (Access Control Lists, Tokens, and RDF Assertions) with their XACML equivalents. © 2012 IEEE.
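A toy sketch of the 'raw' Access Control List variant compared in the paper: each traceability record carries the set of companies allowed to see it. This only illustrates the visibility-restriction idea; it is not the paper's architecture, the record fields are invented, and the XACML variant would delegate the decision to a policy engine instead.

```python
# Toy sketch of the 'raw' Access Control List variant mentioned above: each
# traceability event carries the set of companies allowed to read it. This only
# illustrates the visibility-restriction idea; it is not the paper's architecture,
# and the XACML path would delegate this decision to a policy engine instead.
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    item_id: str                           # identifier of the traced item (placeholder)
    step: str                              # e.g. "shipped", "received"
    acl: set = field(default_factory=set)  # companies allowed to read this event

def visible_events(events, requester):
    """Return only the events whose ACL grants access to the requester."""
    return [e for e in events if requester in e.acl]

events = [
    TraceEvent("item-0001", "shipped",  {"ManufacturerA", "RetailerB"}),
    TraceEvent("item-0001", "received", {"RetailerB"}),
]
for e in visible_events(events, "ManufacturerA"):
    print(e.step)  # only "shipped" is visible to ManufacturerA
```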