241 results for RDF Reification
Abstract:
The present work aims to analyze the formation of environmental awareness and its role in contemporary society. Faced with an alarming scenario of degradation and environmental imbalance, political, social, and non-governmental institutions have asserted the urgent need for an education capable of changing social behavior toward the environment. Environmental education emerged from this project; however, the economic, financial, and social context in which it is embedded undermines its effectiveness, since the transformations of modernity have fostered alienation, reification, individualization, indifference, and consumerism. At this juncture, environmental education needs to be examined from the perspective of a subject engaged in critical reflection on the capitalist structure. Given this need, a reading of Italo Calvino's work is proposed, since it addresses the whole condition of modern man, with his ailments, anxiety, exploitation, selfishness, and destructive action toward himself, others, and the environment in which he lives.
Abstract:
The reverse Monte Carlo (RMC) method generates sets of points in space whose radial distribution functions (RDFs) approximate those of the system of interest. Such sets of configurations should, in principle, be sufficient to determine the structural properties of the system. In this work we apply the RMC technique to fluids of hard diatomic molecules. The experimental RDFs of the hard-dimer fluid were generated by the conventional MC method and used as input for the RMC simulations. Our results indicate that, when only a mono-variable RDF is used, the RMC method is satisfactory only in determining the local structure of the studied fluid. We also suggest that the use of multi-variable RDFs would improve the technique significantly. However, the accuracy of the method turned out to be very sensitive to the variance of the input experimental RDF.
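As an illustration of the idea behind RMC, the sketch below (in Java, with hypothetical names and a simplified point-particle system rather than the hard dimers studied in the paper) proposes random particle moves and accepts them only when the RDF computed from the configuration moves closer, in a least-squares sense, to a target RDF. A real RMC code would also enforce hard-core overlap constraints and accept worsening moves with a Metropolis-like probability; both are omitted here for brevity.

```java
import java.util.Random;

/** Minimal reverse Monte Carlo sketch: fit a configuration to a target RDF. */
public class RmcSketch {
    static final int N = 108;          // particles
    static final double BOX = 10.0;    // cubic box length
    static final int BINS = 50;        // RDF histogram bins
    static final Random RNG = new Random(42);

    public static void main(String[] args) {
        double[][] pos = new double[N][3];
        for (double[] p : pos)
            for (int k = 0; k < 3; k++) p[k] = RNG.nextDouble() * BOX;

        double[] target = new double[BINS];       // would come from experiment / conventional MC
        java.util.Arrays.fill(target, 1.0);       // placeholder: ideal-gas RDF

        double cost = cost(rdf(pos), target);
        for (int step = 0; step < 100_000; step++) {
            int i = RNG.nextInt(N);
            double[] old = pos[i].clone();
            for (int k = 0; k < 3; k++) {          // random trial displacement
                pos[i][k] += (RNG.nextDouble() - 0.5) * 0.3;
                pos[i][k] = (pos[i][k] % BOX + BOX) % BOX;   // periodic boundaries
            }
            double newCost = cost(rdf(pos), target);
            if (newCost <= cost) cost = newCost;   // accept moves that improve the fit
            else pos[i] = old;                     // otherwise reject (no Metropolis noise here)
        }
        System.out.println("final squared deviation from target RDF: " + cost);
    }

    /** Histogram-based RDF estimate with minimum-image periodic distances. */
    static double[] rdf(double[][] pos) {
        double[] g = new double[BINS];
        double rMax = BOX / 2, dr = rMax / BINS;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                double r2 = 0;
                for (int k = 0; k < 3; k++) {
                    double d = pos[i][k] - pos[j][k];
                    d -= BOX * Math.round(d / BOX);
                    r2 += d * d;
                }
                double r = Math.sqrt(r2);
                if (r < rMax) g[(int) (r / dr)]++;
            }
        double rho = N / (BOX * BOX * BOX);
        for (int b = 0; b < BINS; b++) {           // normalise by ideal-gas shell counts
            double shell = 4 * Math.PI * Math.pow((b + 0.5) * dr, 2) * dr;
            g[b] /= 0.5 * N * rho * shell;
        }
        return g;
    }

    static double cost(double[] g, double[] target) {
        double s = 0;
        for (int b = 0; b < BINS; b++) s += (g[b] - target[b]) * (g[b] - target[b]);
        return s;
    }
}
```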
Abstract:
The Asteraceae, one of the largest families among the angiosperms, is chemically characterised by the production of sesquiterpene lactones (SLs). A total of 1,111 SLs, extracted from 658 species, 161 genera, 63 subtribes and 15 tribes of Asteraceae, were represented and registered in two dimensions in SISTEMATX, an in-house software system, and associated with their botanical sources. Eleven blocks of descriptors (Constitutional, Functional groups, BCUT, Atom-centred, 2D autocorrelations, Topological, Geometrical, RDF, 3D-MoRSE, GETAWAY and WHIM) were used as input data to separate the botanical occurrences through self-organising maps. The maps generated with each descriptor block divided the Asteraceae tribes with total index values between 66.7% and 83.6%. Analysis of the results shows clear similarities among the Heliantheae, Helenieae and Eupatorieae tribes, as well as between the Anthemideae and Inuleae tribes. These observations agree with the systematic classifications proposed by Bremer, which rely mainly on morphological and molecular data; chemical markers therefore partially corroborate those classifications. The results demonstrate that the atom-centred and RDF descriptors can be used as a tool for taxonomic classification at low hierarchical levels, such as tribes. Descriptors obtained from fragments or from the two-dimensional representation of the SL structures were sufficient to obtain significant results, and no better results were achieved with descriptors derived from three-dimensional representations of the SLs. Models based on such physico-chemical properties can project newly designed SLs, similar structures from the literature, or even unreported structures into a two-dimensional chemical space. The generated SOMs can therefore predict the most probable tribe in which a biologically active molecule can be found, according to Bremer's classification.
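To make the classification step concrete, here is a minimal self-organising map sketch in Java. The grid size, learning schedule, and the randomly generated "descriptor" vectors are illustrative assumptions, not the SISTEMATX setup or the descriptor values used in the study.

```java
import java.util.Random;

/** Minimal self-organising map (SOM): maps descriptor vectors onto a 2D grid. */
public class SomSketch {
    final int gridX, gridY, dim;
    final double[][][] w;             // w[x][y] = weight vector of the map unit
    final Random rng = new Random(7);

    SomSketch(int gridX, int gridY, int dim) {
        this.gridX = gridX; this.gridY = gridY; this.dim = dim;
        w = new double[gridX][gridY][dim];
        for (double[][] row : w)
            for (double[] unit : row)
                for (int k = 0; k < dim; k++) unit[k] = rng.nextDouble();
    }

    /** Index of the best-matching unit (smallest squared Euclidean distance). */
    int[] bmu(double[] v) {
        int[] best = {0, 0};
        double bestD = Double.MAX_VALUE;
        for (int x = 0; x < gridX; x++)
            for (int y = 0; y < gridY; y++) {
                double d = 0;
                for (int k = 0; k < dim; k++) d += (v[k] - w[x][y][k]) * (v[k] - w[x][y][k]);
                if (d < bestD) { bestD = d; best = new int[]{x, y}; }
            }
        return best;
    }

    /** Training: pull the BMU and its neighbours towards each sample, with decaying rate and radius. */
    void train(double[][] data, int epochs, double lr0, double radius0) {
        int tMax = epochs * data.length, t = 0;
        for (int e = 0; e < epochs; e++)
            for (double[] v : data) {
                double frac = (double) t++ / tMax;
                double lr = lr0 * (1 - frac), radius = radius0 * (1 - frac) + 1e-9;
                int[] c = bmu(v);
                for (int x = 0; x < gridX; x++)
                    for (int y = 0; y < gridY; y++) {
                        double dist2 = (x - c[0]) * (x - c[0]) + (y - c[1]) * (y - c[1]);
                        double h = Math.exp(-dist2 / (2 * radius * radius));  // Gaussian neighbourhood
                        for (int k = 0; k < dim; k++)
                            w[x][y][k] += lr * h * (v[k] - w[x][y][k]);
                    }
            }
    }

    public static void main(String[] args) {
        int nSamples = 200, dim = 30;             // e.g. 30 hypothetical descriptor values per SL
        double[][] data = new double[nSamples][dim];
        Random rng = new Random(1);
        for (double[] v : data)
            for (int k = 0; k < dim; k++) v[k] = rng.nextDouble();

        SomSketch som = new SomSketch(10, 10, dim);
        som.train(data, 20, 0.5, 4.0);
        int[] cell = som.bmu(data[0]);            // the grid cell a molecule falls into
        System.out.println("sample 0 maps to unit (" + cell[0] + "," + cell[1] + ")");
    }
}
```

After training, molecules from the same tribe would ideally land in neighbouring grid cells, which is what the total index values in the abstract quantify.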
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages: those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content. Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, process them only once, and thereby avoid a large amount of overhead. Other approaches investigate non-traditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, including message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
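As a rough illustration of the differential-encoding idea (not any specific system surveyed here), the sketch below compares an outgoing SOAP message against a cached template, line by line, and transmits only the lines that differ together with their positions; the receiver patches its own copy of the template to reconstruct the full message. The class names and the line-level granularity are illustrative assumptions; production systems typically diff at the XML-node level.

```java
import java.util.ArrayList;
import java.util.List;

/** Naive differential encoding of SOAP messages against a shared template. */
public class SoapDiffSketch {

    /** A single edit: replace line {@code index} of the template with {@code text}. */
    record Patch(int index, String text) {}

    /** Sender side: encode only the lines that differ from the template. */
    static List<Patch> encode(String template, String message) {
        String[] t = template.split("\n"), m = message.split("\n");
        List<Patch> patches = new ArrayList<>();
        for (int i = 0; i < m.length; i++)
            if (i >= t.length || !t[i].equals(m[i])) patches.add(new Patch(i, m[i]));
        return patches;
    }

    /** Receiver side: apply the patches to its own copy of the template. */
    static String decode(String template, List<Patch> patches) {
        List<String> lines = new ArrayList<>(List.of(template.split("\n")));
        for (Patch p : patches) {
            while (lines.size() <= p.index()) lines.add("");
            lines.set(p.index(), p.text());
        }
        return String.join("\n", lines);
    }

    public static void main(String[] args) {
        String template = String.join("\n",
            "<soap:Envelope xmlns:soap=\"http://www.w3.org/2003/05/soap-envelope\">",
            "  <soap:Body>",
            "    <getQuote><symbol>XXXX</symbol></getQuote>",
            "  </soap:Body>",
            "</soap:Envelope>");
        String message = template.replace("XXXX", "ACME");   // structurally identical request

        List<Patch> patches = encode(template, message);
        System.out.println("lines transmitted: " + patches.size() + " of " + message.split("\n").length);
        System.out.println(decode(template, patches).equals(message));   // true
    }
}
```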
Abstract:
This work develops one of the five components of the triskel architecture, a NoSQL database that addresses the Big Data problem of the Semantic Web, namely the huge number of resource identifiers required by the growing number of websites: specifically, the execution engine for triple-based patterns built on RDF technology. The engine receives the query request from the interpreter, analyses the patterns involved in the query in search of exploitable dependencies among them so that the query can be executed faster, resolves the individual patterns against the storage back end, a TripleStore, and returns the result of the request as a table.
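The sketch below illustrates, in a few lines of Java, the basic operation such an engine performs: matching triple patterns (where any position may be a variable) against an in-memory triple store and joining the bindings into a result table. The class and variable names are illustrative; this is not triskel's actual code.

```java
import java.util.*;

/** Toy triple store with pattern matching and a nested-loop join over two patterns. */
public class TriplePatternSketch {

    record Triple(String s, String p, String o) {}

    static boolean isVar(String term) { return term.startsWith("?"); }

    /** Match one pattern against the store, extending each incoming binding row. */
    static List<Map<String, String>> match(List<Triple> store, String[] pattern,
                                           List<Map<String, String>> rows) {
        List<Map<String, String>> out = new ArrayList<>();
        for (Map<String, String> row : rows)
            for (Triple t : store) {
                Map<String, String> r = new HashMap<>(row);
                String[] terms = {t.s(), t.p(), t.o()};
                boolean ok = true;
                for (int i = 0; i < 3; i++) {
                    String q = pattern[i];
                    if (isVar(q)) {
                        String bound = r.get(q);
                        if (bound == null) r.put(q, terms[i]);        // bind the variable
                        else if (!bound.equals(terms[i])) ok = false; // must agree with earlier binding
                    } else if (!q.equals(terms[i])) ok = false;       // constant must match exactly
                }
                if (ok) out.add(r);
            }
        return out;
    }

    public static void main(String[] args) {
        List<Triple> store = List.of(
            new Triple("ex:alice", "ex:knows", "ex:bob"),
            new Triple("ex:bob", "ex:knows", "ex:carol"),
            new Triple("ex:bob", "ex:age", "\"42\""));

        // Query: ?x knows ?y . ?y knows ?z  (two patterns sharing the variable ?y)
        List<Map<String, String>> rows = List.of(Map.of());
        rows = match(store, new String[]{"?x", "ex:knows", "?y"}, rows);
        rows = match(store, new String[]{"?y", "ex:knows", "?z"}, rows);
        rows.forEach(System.out::println);   // result table: {?x=ex:alice, ?y=ex:bob, ?z=ex:carol}
    }
}
```

Detecting that the two patterns share the variable ?y is exactly the kind of exploitable dependency the engine looks for, since it lets the second pattern be resolved only against bindings produced by the first.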
Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. It is therefore crucial to refine the support for generic programming in mainstream Object-Oriented languages, both at the design and at the implementation level, and to suggest novel ways to exploit the additional expressiveness made available by genericity. This study is meant to contribute towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when reification of generic methods and wildcards is considered; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach, first pioneered by the so-called EGO translator, is here turned into a full-blown solution that reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. The second contribution concerns Java-Prolog integration. Integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog's declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: hybrid languages featuring both Object-Oriented and logic traits are typically too complex, making mainstream application development a harder task; library-based integration approaches offer no true language integration, and some "boilerplate code" has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated use of generics and wildcards allows a precise mapping between Object-Oriented and declarative features to be defined. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
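For readers unfamiliar with the underlying problem, the snippet below shows what the lack of reification means in stock Java (type arguments are erased at run time) and the common type-token workaround of passing a Class<T> explicitly. It illustrates the motivation only; it is not the EGO-based type-passing technique developed in the thesis.

```java
import java.util.ArrayList;
import java.util.List;

/** Why reification matters: type arguments are erased in standard Java. */
public class ErasureDemo {

    /** Type-token workaround: the caller passes the element class explicitly. */
    static <T> List<T> filterByType(List<?> input, Class<T> elementType) {
        List<T> out = new ArrayList<>();
        for (Object x : input)
            if (elementType.isInstance(x)) out.add(elementType.cast(x));
        return out;
    }

    public static void main(String[] args) {
        // Erasure: both lists share the same runtime class; the type argument is gone.
        System.out.println(new ArrayList<String>().getClass() == new ArrayList<Integer>().getClass()); // true

        // Without reification, "instanceof List<String>" is illegal; a Class<T> token fills the gap.
        List<Object> mixed = List.of("a", 1, "b", 2.0);
        List<String> strings = filterByType(mixed, String.class);
        System.out.println(strings);   // [a, b]
    }
}
```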
Abstract:
The activity of the Ph.D. student Juri Luca De Coi involved the research field of policy languages and can be divided into three parts. The first part of the Ph.D. work investigated the state of the art in policy languages, ending up with: (i) identifying the requirements that up-to-date policy languages have to fulfill; (ii) defining a policy language able to fulfill such requirements (namely, the Protune policy language); and (iii) implementing an infrastructure able to enforce policies expressed in the Protune policy language. The second part of the Ph.D. work focused on simplifying the activity of defining policies and ended up with: (i) identifying a subset of the controlled natural language ACE to express Protune policies; (ii) implementing a mapping between ACE policies and Protune policies; and (iii) adapting the ACE Editor to guide users step by step when defining ACE policies. The third part of the Ph.D. work tested the feasibility of the chosen approach by applying it to meaningful real-world problems, among them: (i) the development of a security layer on top of RDF stores; and (ii) efficient policy-aware access to metadata stores. The research activity was performed in tight collaboration with the Leibniz Universität Hannover and further European partners within the projects REWERSE, TENCompetence and OKKAM.
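Purely to illustrate what a policy-enforcement layer in front of an RDF store has to do, the sketch below checks each incoming request against a small set of allow rules before it reaches the store. The rule format is a made-up simplification for this example and is not Protune's actual syntax or semantics.

```java
import java.util.List;

/** Toy policy layer: decide whether a requester may run a query against an RDF store. */
public class PolicyLayerSketch {

    /** A single allow rule: who may perform which action on which named graph. */
    record Rule(String role, String action, String graph) {}

    record Request(String requesterRole, String action, String graph) {}

    static final List<Rule> POLICY = List.of(
        new Rule("admin",   "UPDATE", "*"),                 // admins may modify anything
        new Rule("member",  "SELECT", "http://example.org/graph/public"),
        new Rule("partner", "SELECT", "http://example.org/graph/projects"));

    /** Enforcement point: a request is allowed iff some rule covers it. */
    static boolean allowed(Request req) {
        return POLICY.stream().anyMatch(r ->
            r.role().equals(req.requesterRole())
            && r.action().equals(req.action())
            && (r.graph().equals("*") || r.graph().equals(req.graph())));
    }

    public static void main(String[] args) {
        Request q1 = new Request("member", "SELECT", "http://example.org/graph/public");
        Request q2 = new Request("member", "UPDATE", "http://example.org/graph/public");
        System.out.println(allowed(q1));   // true: covered by the second rule
        System.out.println(allowed(q2));   // false: no rule lets members update
    }
}
```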
Abstract:
The work carried out in this thesis consisted of analysing the Semantic Web and its languages for representing information. Ontologies were then introduced, highlighting their role within the Semantic Web. Finally, a study of the ontologies currently developed was carried out, in order to complete a comparative analysis of them.
Abstract:
The goal of this thesis is the refinement of a Health Smart Home system, that is, a physical environment (for example, a house) that incorporates a communication network able to connect electronic appliances and remotely controllable services, with the aim of making life easier for the elderly, the sick or the disabled in their own homes. The thesis shows how such a system was built starting from the theories and technologies developed for the Semantic Web, in order to turn the physical environment into a fully working Cyber Physical (Eco)System.
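To give a flavour of how Semantic Web technologies can describe such an environment, the sketch below uses Apache Jena to publish a remotely controllable appliance as an RDF resource. The vocabulary (http://example.org/smarthome#) and the property names are invented for illustration and are not the ontology used in the thesis.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

/** Describe a smart-home appliance as RDF with Apache Jena and print it as Turtle. */
public class SmartHomeRdfSketch {
    static final String NS = "http://example.org/smarthome#";   // hypothetical vocabulary

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("sh", NS);

        Resource thermostat = model.createResource(NS + "livingRoomThermostat")
            .addProperty(model.createProperty(NS, "type"), "Thermostat")
            .addProperty(model.createProperty(NS, "locatedIn"), model.createResource(NS + "livingRoom"))
            .addProperty(model.createProperty(NS, "remotelyControllable"), model.createTypedLiteral(true))
            .addProperty(model.createProperty(NS, "targetTemperature"), model.createTypedLiteral(21.5));

        // Other components of the system could query or update this description over the network.
        System.out.println("Described device: " + thermostat.getURI());
        model.write(System.out, "TURTLE");
    }
}
```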