996 results for Extensible Stylesheet Language - XSL
Abstract:
This paper presents a focused crawler, CSR, for retrieving Semantic Web resources. Structured data on the web are available in formats such as Extensible Markup Language (XML), Resource Description Framework (RDF) and Web Ontology Language (OWL) that can be used for processing. One of the main challenges of searching for and downloading Semantic Web resources manually is that the task consumes a great deal of time. Our research work proposes a focused crawler that downloads these resources automatically and stores them on disk, building a collection that can later be used for data processing. CSR consists of three layers: (a) the User Interface Layer, (b) the Focused Crawler Layer and (c) the Base Crawler Layer. CSR uses the Shark-Search method as its selection policy. CSR was evaluated in two experiments. The first started on December 15, 2012 at 7:11 am and ended on December 16, 2012 at 4:01, retrieving 448,123,537 bytes of data. CSR terminated by itself after analyzing 80,4375 seeds with unlimited depth, obtaining 16,576 semantic resource files, of which 89% were RDF, 10% XML and 1% OWL. The second experiment was based on the Web Data Commons work of the Data and Web Science Research Group at the University of Mannheim and the Institute AIFB at the Karlsruhe Institute of Technology. It began at 4:46 am on June 2, 2013 and ended at 1:37 am on June 9, 2013. After 162.51 hours of execution the result was 285,279 semantic resources, dominated by XML resources at 99%, with OWL and RDF at 1% each.
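As an illustration of the Shark-Search selection policy named in this abstract, the following is a minimal Python sketch of how a crawl frontier might score a candidate link; the decay factor, weights, and similarity measure are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of Shark-Search child-URL scoring; d, gamma and the
# similarity function are illustrative assumptions, not paper values.

def similarity(query_terms, text):
    """Toy relevance: fraction of query terms present in the text."""
    words = set(text.lower().split())
    hits = sum(1 for t in query_terms if t.lower() in words)
    return hits / len(query_terms) if query_terms else 0.0

def shark_search_score(query_terms, parent_score, parent_relevance,
                       anchor_text, context_text, d=0.5, gamma=0.8):
    # Relevant parents pass on their own relevance; irrelevant ones
    # pass a decayed fraction of their inherited score.
    inherited = d * (parent_relevance if parent_relevance > 0 else parent_score)
    # Neighborhood score blends anchor text with its surrounding context.
    anchor = similarity(query_terms, anchor_text)
    context = anchor if anchor > 0 else similarity(query_terms, context_text)
    neighborhood = gamma * anchor + (1 - gamma) * context
    # Final priority used to order the crawl frontier.
    return gamma * inherited + (1 - gamma) * neighborhood

# Example: score a candidate link for the query "rdf owl".
print(shark_search_score(["rdf", "owl"], 0.4, 0.6,
                         "download RDF dataset", "semantic web resources page"))
```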
Abstract:
Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible: it supports a range of observation types and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by 'naive' users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
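As a rough illustration of the kind of uncertainty encoding this abstract describes, the following Python sketch builds a small XML fragment for a kriging prediction; the element names and structure are hypothetical placeholders, not the actual INTAMAP/UncertML schemata.

```python
# A minimal sketch of encoding an interpolation result with its
# uncertainty as XML for exchange with a Web Processing Service.
# All element names below are illustrative placeholders.
import xml.etree.ElementTree as ET

def gaussian_prediction(mean, variance):
    """Encode a kriging prediction as a Gaussian distribution element."""
    dist = ET.Element("GaussianDistribution")  # hypothetical element name
    ET.SubElement(dist, "mean").text = str(mean)
    ET.SubElement(dist, "variance").text = str(variance)
    return dist

root = ET.Element("InterpolationResult")       # hypothetical wrapper
root.append(gaussian_prediction(12.7, 3.2))
print(ET.tostring(root, encoding="unicode"))
```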
Abstract:
Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web based systems, or systems based on web services, such as Taverna and Kepler, allow composition of models based on standard web service technologies. At the same time the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processing - including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as a default data exchange standard, and widely-used encodings such as JavaScript Object Notation can often only be used when incorporated with Extensible Markup Language. Similarly, the Web Processing Service standard has not yet been successfully engaged with the well-supported technologies of Simple Object Access Protocol and Web Services Description Language. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus realising the vision of a useful 'model web'.
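To make the proposal concrete, here is a hedged Python sketch of how a client might consume such a SOAP/WSDL processing service, using the third-party zeep library; the WSDL URL and the RunModel operation are hypothetical examples, not endpoints from the paper.

```python
# A minimal sketch of a generic SOAP client; requires `pip install zeep`
# and a live (here hypothetical) service endpoint.
from zeep import Client

# The WSDL document describes every operation and its typed inputs and
# outputs, so a generic client can discover them without prior
# knowledge of the service.
client = Client("http://example.org/model-service?wsdl")  # hypothetical URL

# Invoke a hypothetical geospatial model operation with typed inputs.
result = client.service.RunModel(region="<gml:Envelope>...</gml:Envelope>",
                                 resolution=100)
print(result)
```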
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and uses tools such as Papyrus or Papyrus-RT that generate code based on that model. Tracing allows developers to gain insights into their application as it runs, such as which events occur and their timing. We add monitoring capabilities to models created in UML-RT using Papyrus-RT by means of the Linux Trace Toolkit: next generation (LTTng). The implementation requires changing the code generator to add tracing statements for the events that the user wants to monitor to the generated code. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually using Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
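As a sketch of the generator change this abstract describes, the following Python fragment injects tracing statements into generated code; the handler naming convention and the tracepoint call shape are illustrative assumptions, not the actual Papyrus-RT generator output.

```python
# A minimal sketch of a code-generation pass that inserts an
# LTTng-style tracepoint at the top of each monitored event handler.
# The handler pattern and tracepoint arguments are assumptions.

TRACEPOINT = 'tracepoint(model_events, {event}, "{capsule}");'

def inject_tracing(generated_lines, capsule, monitored_events):
    """Insert a tracepoint after the opening line of each monitored handler."""
    out = []
    for line in generated_lines:
        out.append(line)
        for event in monitored_events:
            # Assume handlers look like 'void on_<event>(...) {' in the
            # generated C++ -- purely an assumption for this sketch.
            if line.strip().startswith(f"void on_{event}("):
                out.append("    " + TRACEPOINT.format(event=event, capsule=capsule))
    return out

code = ["void on_start(Message m) {", "    run();", "}"]
print("\n".join(inject_tracing(code, "Pinger", ["start"])))
```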
Abstract:
Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive atmosphere, shortening the manufacturing cycle time and maintaining production flexibility, thereby yielding several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a single plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent based negotiation approach to integrate manufacturing functions in a distributed manner, and presents its fundamental framework and functions. Moreover, an ontology has been constructed using the Protégé software, which possesses the flexibility to convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with those of the evolutionary Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
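As a rough illustration of exchanging ontology-backed descriptions across the manufacturing network, here is a minimal Python sketch using rdflib to serialize a toy resource description to an XML-based OWL document; the class and property names are assumptions, not the ontology built in Protégé for this work.

```python
# A minimal sketch of producing an OWL document in an XML serialization
# for shipping manufacturing resource descriptions across the network.
# Requires `pip install rdflib`; all names below are illustrative.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, OWL

MFG = Namespace("http://example.org/mfg#")  # hypothetical namespace
g = Graph()
g.bind("mfg", MFG)

# Declare a Machine class and one instance with a capability property.
g.add((MFG.Machine, RDF.type, OWL.Class))
g.add((MFG.cnc01, RDF.type, MFG.Machine))
g.add((MFG.cnc01, MFG.capability, Literal("milling")))

# The RDF/XML output is what would travel across the manufacturing network.
print(g.serialize(format="xml"))
```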
Abstract:
XBRL - eXtensible Business Reporting Language - is a language being implemented in several countries for the disclosure of accounting and financial information over the internet. This article presents the state of the art of XBRL and how it has evolved, and assesses the current stage of internet disclosure of accounting and financial information in Brazil. A survey was conducted with publicly traded companies in Brazil. The survey revealed strong acceptance of the electronic medium for disclosing financial information, but also that knowledge of XBRL in the country is still very limited and, consequently, the number of entities that have formally begun studies for its implementation is smaller still. It also showed the absence of a standard for electronic disclosure, with the PDF, HTML and DOC formats predominating, which hinders the analysis and comparison of information among regulatory bodies and with the general public.
Abstract:
The strategic management of information plays a fundamental role in the organizational management process, since the decision-making process depends on it for survival in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which, in turn, direct relations between the controlling power of the company and investors. In this context, this article presents the relationship between the disclosure of information by joint-stock companies using XBRL and the open data model adopted by the Brazilian government, a model boosted by the publication of the Information Access Law (Lei de Acesso à Informação), nº 12,527 of 18 November 2011. Information access should be permeated by a mediation policy in order to underpin the knowledge construction and decision-making of investors. XBRL is the main model for the publication of financial information. The use of XBRL together with the new semantic standards created for Linked Data strengthens information dissemination and creates mechanisms for analysis and for cross-referencing data with the different open databases available on the Internet, adding value to the data/information accessed by civil society.
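As an illustration of the XBRL-to-Linked-Data idea in this abstract, the following Python sketch lifts a single financial fact from a simplified XBRL instance into an RDF triple using rdflib; the tag names and namespaces are simplified placeholders, not a real taxonomy.

```python
# A minimal sketch: parse one fact from a toy XBRL instance and emit it
# as RDF, ready to be cross-referenced with other open datasets.
# Requires `pip install rdflib`; names are illustrative placeholders.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, Literal

instance = """<xbrl><Assets contextRef="FY2023">1500000</Assets></xbrl>"""
fact = ET.fromstring(instance).find("Assets")

EX = Namespace("http://example.org/company#")   # hypothetical
FIN = Namespace("http://example.org/finance#")  # hypothetical

g = Graph()
g.add((EX.acme, FIN.Assets, Literal(int(fact.text))))
print(g.serialize(format="turtle"))
```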
Abstract:
The content of a Learning Object is frequently characterized by metadata from several standards, such as LOM, SCORM and QTI. Specialized domains require new application profiles that further complicate the task of editing the metadata of learning objects, since their data models are not supported by existing authoring tools. To cope with this problem we designed a metadata editor supporting multiple metadata languages, each with its own data model. It is assumed that the supported languages have an XML binding, and we use RDF to create a common metadata representation, independent of the syntax of each metadata language. The combined data model supported by the editor is defined as an ontology. Thus, the process of extending the editor to support a new metadata language is twofold: first, the conversion from the XML binding of the metadata language to RDF and vice-versa; second, the extension of the ontology to cover the new metadata model. In this paper we describe the general architecture of the editor, explain how a typical metadata language for learning objects is represented as an ontology, and show how this formalization captures all the data required to generate the graphical user interface of the editor.
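As a sketch of the first extension step described here (XML binding to RDF), the following Python fragment maps one field of a LOM-like record into a common RDF representation using rdflib; the XML snippet and property URI are simplified assumptions, not the editor's actual mapping.

```python
# A minimal sketch of converting one field of a metadata language's XML
# binding into the editor's common RDF representation.
# Requires `pip install rdflib`; the URI and XML are illustrative.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, Literal, BNode

lom_xml = "<lom><general><title>Intro to XSL</title></general></lom>"
title = ET.fromstring(lom_xml).findtext("general/title")

LOM = Namespace("http://example.org/lom#")  # hypothetical property URI base
g = Graph()
record = BNode()  # one learning-object metadata record
g.add((record, LOM.title, Literal(title)))
print(g.serialize(format="turtle"))
```

The reverse conversion (RDF back to the XML binding) would walk the graph and rebuild the element tree, completing the round trip the abstract describes.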
Abstract:
Written text is an important component in the process of knowledge acquisition and communication. Poorly written text fails to deliver clear ideas to the reader, no matter how revolutionary and ground-breaking those ideas are. A good writing style is essential to transfer ideas smoothly. While we have sophisticated tools to check for stylistic problems in program code, we do not apply the same techniques to written text. In this paper we present TextLint, a rule-based tool to check for common style errors in natural language. TextLint provides a structural model of written text and an extensible rule-based checking mechanism.
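As an illustration of the rule-based checking mechanism described here, the following is a minimal Python sketch in the spirit of TextLint; the two rules are illustrative examples, not rules from the actual tool.

```python
# A minimal sketch of a rule-based style checker: each rule pairs a
# pattern with an explanation, and the checker reports every match.
import re

RULES = [
    (re.compile(r"\bvery\b", re.I), "avoid the weak intensifier 'very'"),
    (re.compile(r"\b(\w+)\s+\1\b", re.I), "repeated word"),
]

def lint(text):
    """Return (offset, message) pairs for every rule violation."""
    return [(m.start(), msg) for rx, msg in RULES for m in rx.finditer(text)]

for offset, msg in lint("This is is a very good idea."):
    print(f"char {offset}: {msg}")
```

New rules extend the checker by appending a (pattern, message) pair, which mirrors the extensibility the abstract emphasizes.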
Abstract:
We describe some of the novel aspects and motivations behind the design and implementation of the Ciao multiparadigm programming system. An important aspect of Ciao is that it provides the programmer with a large number of useful features from different programming paradigms and styles, and that the use of each of these features can be turned on and off at will for each program module. Thus, a given module may be using e.g. higher order functions and constraints, while another module may be using objects, predicates, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of program optimizations. Such optimizations produce code that is highly competitive with that of other dynamic languages or, when the highest levels of optimization are used, even with that of static languages, all while retaining the interactive development environment of a dynamic language. The environment also includes a powerful auto-documenter. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in the format of a paper, pointing instead to the existing literature on the system.
Abstract:
PURPOSE: To determine the association between language and number of citations of ophthalmology articles published in Brazilian journals. METHODS: This study was a systematic review. Original articles were identified by review of documents published in the two Brazilian ophthalmology journals indexed in the Science Citation Index Expanded - SCIE [Arquivos Brasileiros de Oftalmologia (ABO) and Revista Brasileira de Oftalmologia (RBO)]. All document types (articles and reviews) listed in the SCIE in English (English Group) or in Portuguese (Portuguese Group) from January 1, 2008 to December 31, 2009 were included, except: editorial materials; corrections; letters; and biographical items. The primary outcome was the number of citations through the end of the second year after the publication date. Subgroup analysis included likelihood of citation (cited at least once versus no citation), journal, and year of publication. RESULTS: The search of the Web of Science revealed 382 articles [107 (28%) in the English Group and 275 (72%) in the Portuguese Group]. Of those, 297 (77.7%) were published in the ABO and 85 (22.3%) in the RBO. The citation counts were statistically significantly higher (P<0.001) in the English Group (mean 1.51 - SD 1.98 - range 0 to 11) compared with the Portuguese Group (mean 0.57 - SD 1.06 - range 0 to 7). The likelihood of citation was statistically significantly higher (P<0.001) in the English Group (70/107 - 65.4%) compared with the Portuguese Group (89/275 - 32.7%). There were more articles published in English in the ABO (98/297 - 32.9%) than in the RBO (9/85 - 10.6%) [P<0.001]. There was no significant difference (P=0.967) in the proportion of articles published in English between 2008 (48/172 - 27.9%) and 2009 (59/210 - 28.1%). CONCLUSION: The number of citations of articles published in Portuguese in Brazilian ophthalmology journals is lower than that of articles published in English. The results of this study suggest that the editorial boards should strongly encourage authors to adopt English as the main language in their future articles.
Abstract:
The objective of this study is to describe preliminary results from the cross-cultural adaptation of the Quality of Life Assessment Questionnaire, used to measure health related quality of life (HRQL) in Brazilian children aged between 5 and 11 with HIV/AIDS. The cross-cultural model evaluated the Concept, Item, Semantic and Measurement Equivalences (internal consistency and intra-observer reliability). Evaluation of the conceptual, item, and semantic equivalences showed that the Portuguese version is pertinent for the Brazilian context. Four of seven domains showed internal consistency above 0.70 (α: 0.76-0.90) and five of seven revealed intra-observer reliability (r_icc: 0.41-0.70). This first Portuguese version of the HRQL questionnaire can be understood as a valuable tool for assessing children's HRQL, but further studies with large samples and more robust analyses are recommended before use in the Brazilian context.
Abstract:
In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP uses the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, we propose a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor), designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture comprises, as input, a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and, as output, the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture, and adopting a psycholinguistic view of thematic theory.
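As a rough sketch of the input/output mapping this abstract describes, the following Python fragment scores thematic roles from a word's semantic microfeatures with a single untrained layer; the feature set, roles, and weights are illustrative assumptions and do not reflect the biologically inspired architecture or training algorithm of BIO theta PRED.

```python
# A minimal sketch: map a featural representation of a word to scores
# over thematic roles. Features, roles and weights are illustrative.
import numpy as np

FEATURES = ["animate", "concrete", "is_verb_arg", "preposition_locative"]
ROLES = ["AGENT", "PATIENT", "LOCATION"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(ROLES), len(FEATURES)))  # untrained weights

def predict_role(feature_vector):
    """Score each thematic role and return the highest-scoring one."""
    scores = W @ np.asarray(feature_vector, dtype=float)
    return ROLES[int(np.argmax(scores))]

# 'dog' in "the dog chased the cat": animate, concrete verb argument.
print(predict_role([1, 1, 1, 0]))
```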