971 results for extensible hypertext markup language
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face an information flood when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources, in which discovery is performed at three levels: content, service, and agent.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings from service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks in the web.

The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common, unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
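To make the content level concrete, the following minimal Python sketch shows the general idea of discovery rules that map pieces of an (X)HTML representation onto semantic entities; the rule format, vocabulary URIs, and sample page are illustrative assumptions, not the thesis's actual scraping ontology.

```python
# Minimal illustration of content discovery rules: declarative mappings from
# locations in an (X)HTML representation to semantic properties. The rule
# format, vocabulary URIs, and sample page below are hypothetical.
import xml.etree.ElementTree as ET

SAMPLE_PAGE = """
<html>
  <body>
    <div class="article">
      <h1 class="headline">Markets rally after announcement</h1>
      <span class="author">Jane Doe</span>
    </div>
  </body>
</html>
"""

# A discovery rule maps an element selector to a semantic predicate.
DISCOVERY_RULES = {
    "http://example.org/news#headline": ".//h1[@class='headline']",
    "http://example.org/news#author":   ".//span[@class='author']",
}

def discover(xhtml, rules):
    """Apply discovery rules to a representation and return RDF-like triples."""
    root = ET.fromstring(xhtml)
    subject = "http://example.org/resource/1"   # the scraped resource
    triples = []
    for predicate, xpath in rules.items():
        for element in root.findall(xpath):
            triples.append((subject, predicate, element.text.strip()))
    return triples

if __name__ == "__main__":
    for triple in discover(SAMPLE_PAGE, DISCOVERY_RULES):
        print(triple)
```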
Abstract:
In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city management issues. It is based on the Scalable Vector Graphics (SVG) standard for web graphics. The project uses the concept of remote, real-time map creation through database access, with instructions executed by browsers over the Internet. To demonstrate the system's effectiveness, we present two case studies: the first on a region named Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, where we aimed to replace MapServer with the system proposed here. We also show results that demonstrate the greater geographical data handling capability achieved by using standardized codes and open-source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP), and PostgreSQL with its extension, PostGIS.
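The core mechanism, generating SVG map layers on the fly from stored geometries, can be sketched as follows; the real system queries PostGIS from PHP over the web, whereas this Python fragment simply projects hard-coded sample polygons into an SVG viewport to illustrate the idea.

```python
# Rough sketch of dynamic SVG map generation: project stored polygon
# geometries into an SVG viewport. The real system pulls geometries from
# PostGIS via PHP; the reef outlines below are made-up sample data.
FEATURES = {
    "reef_a": [(-35.25, -5.35), (-35.24, -5.36), (-35.23, -5.34)],
    "reef_b": [(-35.20, -5.30), (-35.19, -5.32), (-35.18, -5.29)],
}

def to_svg(features, width=400, height=400):
    xs = [x for pts in features.values() for x, _ in pts]
    ys = [y for pts in features.values() for _, y in pts]
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)

    def project(x, y):
        px = (x - min_x) / (max_x - min_x) * width
        py = height - (y - min_y) / (max_y - min_y) * height  # flip y axis
        return px, py

    polygons = []
    for name, pts in features.items():
        points = " ".join(f"{px:.1f},{py:.1f}" for px, py in (project(*p) for p in pts))
        polygons.append(f'<polygon id="{name}" points="{points}" fill="teal"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
            + "".join(polygons) + "</svg>")

if __name__ == "__main__":
    print(to_svg(FEATURES))
```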
Abstract:
The study presents an alternative to existing efforts at solving the problem of how to centrally manage and synchronise users' Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and use them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Many web users therefore need to manage disparate profiles across many distributed online sources; maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunities. In this thesis the researcher proposes a framework for the management of a user's multiple online social network profiles, together with a demonstrator, the Multiple Profile Manager (MPM), that illustrates the effectiveness of the framework. The MPM achieves the required profile management and synchronisation using a free, open, decentralised social networking platform (OSW) proposed by the Vodafone Group in 2010. The proposed MPM enables a user to create and manage an integrated profile (IP) and to share and synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM server, MPM client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. A development process was followed in establishing the prototype and the related protocols. The use cases were subsequently used to illustrate the prototype via screenshots of the MPM client interfaces, and also played a role in evaluating the outcomes of the research, such as the framework, the prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive personal health overview of patients. This gives a more complete picture of the patient's background than is currently available and helps in providing the right treatment. The MPM prototype and related protocols have high application value, as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information, and synchronising the overall profile with each of the discrete profiles stored in their different social network sites.
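Since the MPM protocol is defined as an XMPP extension, the profile-sharing step can be pictured as an IQ stanza carrying vCard and account data; the sketch below, built with Python's standard library, uses a placeholder namespace and element layout rather than the actual MPM specification.

```python
# Sketch of an XMPP IQ stanza carrying vCard-style profile data, in the
# spirit of the MPM extension. The 'urn:example:mpm:profile' namespace and
# element layout are placeholders, not the protocol defined in the thesis.
import xml.etree.ElementTree as ET

def build_profile_iq(jid, full_name, accounts):
    iq = ET.Element("iq", {"type": "set", "from": jid, "id": "mpm-1"})
    profile = ET.SubElement(iq, "profile", {"xmlns": "urn:example:mpm:profile"})
    vcard = ET.SubElement(profile, "vCard", {"xmlns": "vcard-temp"})
    ET.SubElement(vcard, "FN").text = full_name
    for network, account_id in accounts.items():
        ET.SubElement(profile, "account", {"network": network, "id": account_id})
    return ET.tostring(iq, encoding="unicode")

if __name__ == "__main__":
    print(build_profile_iq(
        jid="alice@onesocialweb.example",
        full_name="Alice Example",
        accounts={"facebook": "alice.fb", "linkedin": "alice-ln"},
    ))
```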
Abstract:
The complex supply chain relations of the construction industry, coupled with the substantial amount of information to be shared on a regular basis between the parties involved, make traditional paper-based data interchange methods inefficient, error-prone, and expensive. Successful information technology (IT) applications that enable seamless data interchange, such as Electronic Data Interchange (EDI) systems, have generally failed to be implemented successfully in the construction industry. An alternative emerging technology, the Extensible Markup Language (XML), and its applicability to streamlining business processes and improving data interchange methods within the construction industry are analysed, as is EDI technology, to identify the strategic advantages that XML provides in overcoming the barriers to implementation. In addition, the successful implementation of an XML-based automated data interchange platform for a large organization, and its proposed benefits, are presented as a case study.
Abstract:
The Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of adaptive genetic algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
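A toy version of the dissemination decision, reduced to comparing an incoming XML document against a user-interest vector with cosine similarity, is sketched below; the paper learns the user model with adaptive genetic algorithms and a multi-class SVM, so the hand-written weights here are placeholders only.

```python
# Toy version of the dissemination decision: score an incoming XML document
# against a user-interest vector and forward it if it is similar enough.
# The interest weights stand in for the GA/SVM-learned model from the paper.
import math
import re
import xml.etree.ElementTree as ET
from collections import Counter

USER_MODEL = Counter({"genome": 3.0, "protein": 2.0, "xml": 1.0})  # placeholder model
THRESHOLD = 0.3

def text_of(xml_doc):
    return " ".join(ET.fromstring(xml_doc).itertext())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def should_disseminate(xml_doc):
    terms = Counter(re.findall(r"[a-z]+", text_of(xml_doc).lower()))
    return cosine(terms, USER_MODEL) >= THRESHOLD

if __name__ == "__main__":
    doc = "<article><title>Protein annotation</title><body>genome data in xml</body></article>"
    print(should_disseminate(doc))
```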
Abstract:
Service-Oriented Architecture (SOA) and Web Services (WS) offer advanced flexibility and interoperability capabilities. However, they imply significant performance overheads that need to be carefully considered. Supply Chain Management (SCM) and traceability systems are an interesting domain for the use of WS technologies, which are usually deemed too complex and unnecessary in practical applications, especially regarding security. This paper presents an externalized security architecture that uses the eXtensible Access Control Markup Language (XACML) authorization standard to enforce visibility restrictions on traceability data in a supply chain where multiple companies collaborate; the performance overheads are assessed by comparing 'raw' authorization implementations (Access Control Lists, Tokens, and RDF Assertions) with their XACML equivalents.
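In such an externalized architecture, the enforcement point builds an XACML request context and delegates the decision to a policy decision point; the following Python sketch assembles a simplified request for traceability data, with namespaces and attribute identifiers abbreviated for illustration.

```python
# Sketch of the request a supply-chain partner's enforcement point could send
# to an external XACML policy decision point before releasing traceability
# data. Namespaces and attribute identifiers are simplified for illustration.
import xml.etree.ElementTree as ET

def build_xacml_request(subject, resource, action):
    req = ET.Element("Request", {"xmlns": "urn:oasis:names:tc:xacml:2.0:context:schema:os"})

    def attribute(parent_tag, attr_id, value):
        parent = ET.SubElement(req, parent_tag)
        attr = ET.SubElement(parent, "Attribute", {
            "AttributeId": attr_id,
            "DataType": "http://www.w3.org/2001/XMLSchema#string",
        })
        ET.SubElement(attr, "AttributeValue").text = value

    attribute("Subject", "urn:oasis:names:tc:xacml:1.0:subject:subject-id", subject)
    attribute("Resource", "urn:oasis:names:tc:xacml:1.0:resource:resource-id", resource)
    attribute("Action", "urn:oasis:names:tc:xacml:1.0:action:action-id", action)
    return ET.tostring(req, encoding="unicode")

if __name__ == "__main__":
    print(build_xacml_request("retailer-42", "shipment/7731/trace", "read"))
```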
Abstract:
Building on XACML (eXtensible Access Control Markup Language) and its draft administrative policy profile, and taking into account the characteristics of the current XACML access control framework, this paper proposes reducing the authorization decision for XACML policy administration to the evaluation of a delegation decision request against delegation policies. The syntax of this delegation decision request is defined with an XML (eXtensible Markup Language) schema, and the paper describes both the process of reducing a policy administration request to a delegation decision request and the process of evaluating that request against delegation policies. In this way, delegation policies can be used to judge whether a policy administration request is valid, thereby achieving policy administration based on extended XACML.
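The reduction described above can be pictured with a toy example: an administrator's request to publish a policy is recast as a delegation decision request and checked against delegation policies. The data structures below are ad hoc illustrations, not XACML syntax.

```python
# Toy reduction of a policy-administration request to a delegation decision:
# an administrator's request to publish a policy is only valid if some
# delegation policy covers that administrator and the resources the new
# policy would govern. Data structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class DelegationPolicy:
    delegate: str          # who received administrative rights
    resource_prefix: str   # scope of resources the delegate may govern

@dataclass
class AdminRequest:
    issuer: str            # administrator trying to publish a policy
    governs_resource: str  # resource the proposed policy applies to

def reduce_and_decide(request, policies):
    """Reduce the administration request to a delegation decision request
    (delegate == issuer, resource == governed resource) and evaluate it."""
    return any(
        p.delegate == request.issuer
        and request.governs_resource.startswith(p.resource_prefix)
        for p in policies
    )

if __name__ == "__main__":
    delegations = [DelegationPolicy(delegate="dept-admin", resource_prefix="hr/")]
    print(reduce_and_decide(AdminRequest("dept-admin", "hr/payroll"), delegations))      # True
    print(reduce_and_decide(AdminRequest("dept-admin", "finance/ledger"), delegations))  # False
```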
Abstract:
An XML (eXtensible Markup Language) parser is the basic software for analysing and processing XML documents. This work studies the implementation of a high-performance validating XML parser. OnceXMLParser, an XML parser supporting three parsing models, has been developed and has passed strict XML conformance tests and API compatibility tests. OnceXMLParser has a lightweight architecture and has been optimized in several respects, including efficient lexical analysis, an automaton implementation based on statistical analysis, a sensible resource allocation strategy, and language-level optimizations. Performance test results show that OnceXMLParser achieves excellent parsing performance.
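For readers unfamiliar with parsing models, the following generic Python fragment contrasts tree-building (DOM-style) and event-driven (SAX-style) parsing; it illustrates the models in general and is not OnceXMLParser's API.

```python
# Illustration of two common parsing models a validating parser typically
# exposes: building a full in-memory tree (DOM-style) versus streaming
# events (SAX-style). Generic Python, not OnceXMLParser's interface.
import io
import xml.etree.ElementTree as ET
import xml.sax

DOC = "<catalog><book id='1'>XML in Practice</book><book id='2'>Parsers</book></catalog>"

# DOM-style: the whole document becomes a tree we can navigate repeatedly.
tree = ET.fromstring(DOC)
print([book.get("id") for book in tree.findall("book")])

# SAX-style: the parser pushes events; memory use stays roughly constant.
class BookCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.count = 0
    def startElement(self, name, attrs):
        if name == "book":
            self.count += 1

handler = BookCounter()
xml.sax.parse(io.BytesIO(DOC.encode()), handler)
print(handler.count)
```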
Abstract:
This paper discusses the dynamic display of molecular structure diagrams in a networked database environment. The database stores the structural information of molecules, and molecular structure diagrams are generated dynamically at retrieval time in the SVG graphics format. SVG is a vector graphics format and an XML-based markup language; with SVG, molecular structure diagrams can be created dynamically, and the resulting files are text files that are small and easy to edit and exchange. The paper presents a method for extracting data from the database and dynamically generating molecular structure diagrams.
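The generation step can be sketched as follows: atoms become labelled circles and bonds become lines in an SVG document. The coordinates below are sample data standing in for what the described system retrieves from the database at query time, and the sketch is in Python rather than the server-side setup used in the paper.

```python
# Sketch of turning stored structure data into an SVG drawing: atoms become
# labelled circles, bonds become lines. The coordinates below are made up.
ATOMS = {"C1": (60, 60), "O1": (140, 60), "H1": (60, 140)}   # atom id -> (x, y)
BONDS = [("C1", "O1"), ("C1", "H1")]                          # pairs of atom ids

def molecule_svg(atoms, bonds, size=200):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for a, b in bonds:
        (x1, y1), (x2, y2) = atoms[a], atoms[b]
        parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>')
    for label, (x, y) in atoms.items():
        parts.append(f'<circle cx="{x}" cy="{y}" r="12" fill="white" stroke="black"/>')
        parts.append(f'<text x="{x}" y="{y + 4}" text-anchor="middle">{label[0]}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

if __name__ == "__main__":
    print(molecule_svg(ATOMS, BONDS))
```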
Abstract:
XML and its use in bioinformatics applications. The use of XML in the JPD module of the SMS. Discussion and future work.
Abstract:
Rule testing in transport scheduling is a complex and potentially costly business problem. This paper proposes an automated method for the rule-based testing of business rules, using the eXtensible Markup Language (XML) for rule representation and transport. A compiled approach to rule execution is also proposed for performance-critical scheduling systems.
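A rough illustration of both ideas, rules carried as XML and compiled once into executable checks, is given below; the rule vocabulary and scheduling fields are invented for the example.

```python
# Sketch of the two ideas in the abstract: business rules carried as XML and
# a "compiled" execution path that turns each rule into a plain function once,
# instead of re-interpreting the XML for every scheduling decision. The rule
# vocabulary below is invented for illustration.
import operator
import xml.etree.ElementTree as ET

RULES_XML = """
<rules>
  <rule id="max-driving-hours" field="driving_hours" op="le" value="9"/>
  <rule id="min-rest-hours"    field="rest_hours"    op="ge" value="11"/>
</rules>
"""

OPS = {"le": operator.le, "ge": operator.ge, "eq": operator.eq}

def compile_rules(xml_text):
    """Compile each <rule> element into a closure evaluated against a schedule record."""
    compiled = []
    for rule in ET.fromstring(xml_text).findall("rule"):
        field, op, value = rule.get("field"), OPS[rule.get("op")], float(rule.get("value"))
        compiled.append((rule.get("id"), lambda rec, f=field, o=op, v=value: o(rec[f], v)))
    return compiled

if __name__ == "__main__":
    shift = {"driving_hours": 8.5, "rest_hours": 10.0}
    for rule_id, check in compile_rules(RULES_XML):
        print(rule_id, "passed" if check(shift) else "failed")
```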
Abstract:
This diploma thesis deals with the presentation of TEI documents in the content management system Drupal. To this end, a module is developed that makes it easy to publish documents in this Extensible Markup Language (XML)-based format in Drupal. The module provides an interface for uploading these documents and additionally offers options for influencing how the displayed documents are rendered: a dedicated menu allows colours, font size, and font face to be set. The documents are converted via XSL transformation, building on the result of a previous project. The presentation is enriched with dynamic elements, such as author annotations and the ability to switch between different text versions, e.g. an original and a corrected one. This functionality is accessible through a toolbar displayed at the bottom of the page, which also makes it possible to search for, or jump directly to, page numbers recognised as such in the document.
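The conversion step can be illustrated with a small XSL transformation of a TEI fragment; the sketch assumes the third-party lxml package is available, and the stylesheet is a toy example rather than the module's actual XSLT.

```python
# Minimal illustration of the conversion step: a TEI fragment rendered to
# HTML-like markup via XSL transformation. Requires the third-party lxml
# package; the stylesheet is a toy, not the Drupal module's actual XSLT.
from lxml import etree

TEI = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><p>Ein kurzer <hi rend="italic">TEI</hi>-Absatz.</p></body></text>
</TEI>"""

XSLT = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:template match="/">
    <div class="tei-text"><xsl:apply-templates select="//tei:p"/></div>
  </xsl:template>
  <xsl:template match="tei:p"><p><xsl:apply-templates/></p></xsl:template>
  <xsl:template match="tei:hi"><em><xsl:apply-templates/></em></xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSLT))
print(str(transform(etree.fromstring(TEI))))
```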
Abstract:
The lecture script made freely available here, together with the collection of exams and model solutions from the years 2006 to 2015, goes back to the lecture of the same name in the computer science Bachelor's programme at the Universität Kassel, which was taught by Prof. Dr. Wegner and, from 2012 onwards, by Dr. Schweinsberg. It covers the fundamentals of the eXtensible Markup Language, which has established itself as a data exchange language. In contrast to HTML, it allows the semantic enrichment of documents. The lecture covers the development of XML-based languages as well as the transformation of XML documents by means of stylesheets (eXtensible Stylesheet Language, XSL). The DOM interface (Document Object Model) and SAX (Simple API for XML) are also introduced.
Abstract:
The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components and requirements, the specified solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. Communication between clients and servers uses remote procedure calls based on XML (XML-RPC). The integration of the Java language, XML, and XML-RPC makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the web application provides simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks".
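The communication pattern can be illustrated with Python's standard XML-RPC modules; the method name and signal data below are placeholders, and the production system described here is implemented in Java.

```python
# Sketch of an XML-RPC-based access layer in the spirit of the one described
# above, using Python's standard library rather than the Java implementation
# used at TCABR. The method name and signal data are placeholders for the
# shared libraries' actual retrieval interface.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def get_signal(shot, channel):
    """Server-side stub: return a (fake) diagnostic signal for a shot number."""
    return {"shot": shot, "channel": channel, "samples": [0.0, 0.1, 0.4, 0.2]}

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: any participating laboratory retrieves data the same way.
host, port = server.server_address
proxy = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
print(proxy.get_signal(25467, "mirnov_01"))
server.shutdown()
```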