995 results for XML Markup Language
Abstract:
At present, the Fältmodul (field module) in WM-data's MoveITS system cannot handle map data in any format other than Shape. The field module is a Tablet PC running the Windows XP operating system; it can be used to edit certain geographic information, such as sign positions, and is used by Stockholms Tekniska kontor (the City of Stockholm's technical office) for taking inventory of sign posts. Stockholms Tekniska kontor is about to start delivering its maps in GML (Geography Markup Language), but since WM-data's field module cannot handle that format, the aim of this thesis project was to develop components for doing so. Because it proved difficult to obtain information about GML during the project, the focus shifted largely to MIF (MapInfo Interchange Format). Since other municipalities also use MIF, WM-data has an interest in components being developed for this format as well. A large number of classes were developed for handling MIF files. These classes were written entirely in C# during the thesis project, based on the specifications for the format published by the company MapInfo. For GML, information was compiled that can serve as a basis for developing components to handle that format.
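The MIF handling described above lends itself to a small illustration. The sketch below is in Python rather than the C# used in the thesis, is limited to POINT objects following the DATA keyword, and ignores the companion .mid attribute file; the assumed layout follows MapInfo's published MIF conventions, and the sample content is invented.

```python
import re
from pathlib import Path

def parse_mif_points(path):
    """Minimal reader for POINT objects in a MapInfo MIF file.

    Only the geometry section after the DATA keyword is scanned; the
    companion .mid attribute file and all non-point objects are ignored.
    """
    points = []
    in_data = False
    for raw in Path(path).read_text(errors="ignore").splitlines():
        line = raw.strip()
        if not in_data:
            if line.upper() == "DATA":
                in_data = True
            continue
        m = re.match(r"(?i)^POINT\s+(-?\d+(?:\.\d+)?)\s+(-?\d+(?:\.\d+)?)", line)
        if m:
            points.append((float(m.group(1)), float(m.group(2))))
    return points

if __name__ == "__main__":
    sample = "Version 300\nColumns 1\n  ID Integer\nData\n\nPoint 18.07 59.33\nPoint 18.08 59.34\n"
    Path("sample.mif").write_text(sample)
    print(parse_mif_points("sample.mif"))   # [(18.07, 59.33), (18.08, 59.34)]
```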
Abstract:
Conceptual modeling of geographic databases (GDB) is a fundamental aspect of reuse, since geographic reality is quite complex and, more than that, part of it recurs in most GDB projects. Conceptual modeling guarantees independence from the database implementation and improves project documentation, preventing the documentation from being merely a set of documents written in the application's jargon. A well-defined conceptual model offers a canonical representation of geographic reality, enabling the reuse of sub-schemas. To obtain the sub-schemas to be reused, the process of Knowledge Discovery in Databases (KDD) can be applied. The final result of KDD produces the so-called analysis patterns. In the scope of this work, the analysis patterns constitute the reusable sub-schemas of the conceptual modeling of a database. The KDD process has several stages, from data selection and preparation to mining and post-processing (analysis of the results). In data preparation, one of the main problems to be faced is the possible heterogeneity of the data. In this work, since the input data are GDB conceptual schemas and there is no widely accepted standard for GDB modeling, heterogeneities tend to increase. Data preparation must integrate different conceptual schemas, based on different data models and designed by different groups working autonomously as a distributed community. To resolve the conflicts between conceptual schemas, a methodology supported by a software architecture was developed, which divides the pre-processing phase into two stages, one syntactic and one semantic. The syntactic stage aims to convert the schemas into a canonical format, the Geography Markup Language (GML). A reasonable number of data models must be considered, as a consequence of the lack of a data model widely accepted as a standard for GDB design. For each of the different data models a set of rules was developed and a wrapper implemented. To support the semantic stage of the integration, an ontology is used to semantically integrate the conceptual schemas of the different projects. The algorithm for querying and updating the knowledge base consists of mathematical methods for measuring similarity between concepts. Once the analysis patterns have been identified, they are stored in a knowledge base that must be easy to query and update. Again, the ontology can be used as the knowledge base, storing the analysis patterns and allowing designers to consult it while modeling their applications. The query results help to compare the conceptual schema under construction with past solutions accepted as correct.
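The semantic stage above relies on measuring similarity between concepts when querying and updating the ontology-based knowledge base. The abstract does not spell out the measure used, so the following Python sketch applies a plain lexical ratio from the standard library purely as an illustration of matching a concept name against candidates from another schema; the concept names are invented.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Lexical similarity between two concept names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical concept names coming from two independently designed schemas
query = "Road"
candidates = ["RoadSegment", "Highway", "Parcel"]

best = max(candidates, key=lambda c: name_similarity(query, c))
print(best)                         # RoadSegment
print(round(name_similarity(query, best), 2))   # 0.53
```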
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper introduces Java applet programs for a WWW (world wide web)-HTML (hypertext markup language)-based multimedia course in Power Electronics. The applets were developed to provide interactive visual simulation and analysis of idealized uncontrolled single-phase and three-phase rectifiers. In addition, this paper discusses the development and use of Java applets to solve some design-oriented equations for rectifier applications. The major goal of the proposed applets was to give students more facilities to progress at their own pace through the Power Electronics course, emphasizing waveform analysis and enabling on-line comparative analysis among different hands-on laboratory experiences via a normal Internet TCP/IP connection. Using the proposed Java applets, embedded in the WWW-HTML-based Power Electronics course, an important improvement in the students' grasp of the course content was observed. The course structure thus becomes fluid, allowing a true on-line course over the WWW and motivating students to learn its content and to apply it in application-oriented projects and homework.
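The waveform quantities that such applets visualize follow directly from the idealized rectifier model. As a stand-in for the Java applets themselves, the short Python sketch below numerically recovers the textbook average and RMS values of an ideal single-phase full-wave rectified sine (2·Vm/π and Vm/√2); the 325 V peak is just an example value.

```python
import math

def full_wave_rectified(v_peak, n=100000):
    """Idealized single-phase full-wave rectifier: v_out(t) = |v_peak * sin(wt)|."""
    samples = [abs(v_peak * math.sin(2 * math.pi * k / n)) for k in range(n)]
    v_avg = sum(samples) / n
    v_rms = math.sqrt(sum(s * s for s in samples) / n)
    return v_avg, v_rms

v_peak = 325.0                      # roughly a 230 V RMS mains supply
v_avg, v_rms = full_wave_rectified(v_peak)
print(round(v_avg, 1), round(2 * v_peak / math.pi, 1))   # numeric vs analytic 2*Vm/pi
print(round(v_rms, 1), round(v_peak / math.sqrt(2), 1))  # numeric vs analytic Vm/sqrt(2)
```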
Abstract:
This paper presents Java applet programs for a WWW (world wide web)-HTML (hypertext markup language)-based multimedia course in basic power electronics circuits. These tools use the benefits of the Java language to provide a dynamic and interactive approach to simulating steady-state idealized rectifiers (uncontrolled and controlled; single-phase and three-phase). In addition, this paper discusses the development and use of the Java applets to assist the teaching of basic rectifier circuits and to serve as a first design tool for basic power electronics circuits in laboratory experiments. To validate the developed simulation applets, their results were compared with those obtained from the well-known simulator package PSPICE. © 2005 IEEE.
Abstract:
In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret and understand the different concepts that arise at different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology, called BP3, stands for Business Process Patterns Perspective. BP3 uses a question-answer interface to capture the designers' business requirements; it is based on pre-defined process patterns, and the final solution is generated by applying the captured requirements, through a set of production rules, to complete the inter-process communication among these patterns.
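To make the pattern-and-rule mechanism more concrete, here is a deliberately tiny Python sketch of a question-answer interface feeding production rules that select process patterns. The questions, rules and pattern names are invented for illustration and are not taken from the BP3 methodology itself.

```python
# Hypothetical question-answer driven, rule-based selection of process patterns.
answers = {
    "payment_before_delivery": True,
    "goods_are_physical": True,
}

production_rules = [                  # (condition over answers, pattern to include)
    (lambda a: a["payment_before_delivery"], "Prepayment pattern"),
    (lambda a: not a["payment_before_delivery"], "Invoice-after-delivery pattern"),
    (lambda a: a["goods_are_physical"], "Physical-delivery pattern"),
    (lambda a: not a["goods_are_physical"], "Electronic-delivery pattern"),
]

selected = [pattern for condition, pattern in production_rules if condition(answers)]
print(selected)   # ['Prepayment pattern', 'Physical-delivery pattern']
```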
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns that are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax and is able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user benefits, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
Abstract:
As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a thorough study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in the dialect of XACML (eXtensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
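As a rough illustration of policy similarity analysis (not the EXAM-S algorithms themselves), the Python sketch below reduces two XACML-like policies to the attribute values in their targets and averages the overlap per target category; the policies and attribute names are invented.

```python
# Simplified stand-in for XACML policy similarity: each policy is reduced to the
# attribute values appearing in its target (subjects, resources, actions).
policy_a = {
    "subjects": {"role:nurse", "role:doctor"},
    "resources": {"record:medical"},
    "actions": {"read"},
}
policy_b = {
    "subjects": {"role:doctor"},
    "resources": {"record:medical", "record:billing"},
    "actions": {"read", "write"},
}

def target_similarity(p, q):
    """Average Jaccard overlap of the three target categories."""
    scores = []
    for key in ("subjects", "resources", "actions"):
        union = p[key] | q[key]
        scores.append(len(p[key] & q[key]) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

print(round(target_similarity(policy_a, policy_b), 2))   # 0.5
```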
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion mounted from the outside against sensitive servers on a LAN. This analysis is carried out on the files captured by the two network interfaces configured in promiscuous mode on a probe placed in the LAN; two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with various tools, which effectively defines the third part of the work: the files captured by the two interfaces are examined first with software for full-content data, such as Wireshark, then with software for session data, processed with Argus, and finally as statistical data, processed with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration to monitor, via plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Nagios can of course be configured to monitor any kind of service offered on the network.
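To illustrate the monitoring part, the sketch below is a minimal Nagios-style plugin in Python that checks remaining disk space and follows the standard exit-code convention (0 OK, 1 WARNING, 2 CRITICAL). The thresholds are assumptions; the thesis itself configured Nagios' existing plugins against a remote agent rather than writing this script.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check of remaining disk space."""
import shutil
import sys

WARN_FREE_PCT = 20.0   # warn below 20 % free space (assumed threshold)
CRIT_FREE_PCT = 10.0   # critical below 10 % free space (assumed threshold)

def main(path="/"):
    usage = shutil.disk_usage(path)
    free_pct = 100.0 * usage.free / usage.total
    msg = f"DISK {path} - {free_pct:.1f}% free ({usage.free // 2**20} MiB)"
    if free_pct < CRIT_FREE_PCT:
        print("CRITICAL: " + msg)
        return 2
    if free_pct < WARN_FREE_PCT:
        print("WARNING: " + msg)
        return 1
    print("OK: " + msg)
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "/"))
```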
Abstract:
For various reasons, it is important, if not essential, to integrate the computations and code used in data analyses, methodological descriptions, simulations, etc. with the documents that describe and rely on them. This integration allows readers to both verify and adapt the statements in the documents. Authors can easily reproduce them in the future, and they can present the document's contents in a different medium, e.g. with interactive controls. This paper describes a software framework for authoring and distributing these integrated, dynamic documents that contain text, code, data, and any auxiliary content needed to recreate the computations. The documents are dynamic in that the contents, including figures, tables, etc., can be recalculated each time a view of the document is generated. Our model treats a dynamic document as a master or "source" document from which one can generate different views in the form of traditional, derived documents for different audiences. We introduce the concept of a compendium as both a container for the different elements that make up the document and its computations (i.e. text, code, data, ...), and as a means for distributing, managing and updating the collection. The step from disseminating analyses via a compendium to reproducible research is a small one. By reproducible research, we mean research papers with accompanying software tools that allow the reader to directly reproduce the results and employ the methods that are presented in the research paper. Some of the issues involved in paradigms for the production, distribution and use of such reproducible research are discussed.
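As a toy illustration of the dynamic-document idea (the idea only, not the framework described in the paper), the following Python sketch re-executes code chunks embedded in a source text each time a view is rendered, so the derived document always reflects the current computation. The chunk delimiter syntax is invented for this sketch.

```python
# Toy "dynamic document": chunks wrapped in <<...>> are (re)executed every time
# a view is generated, and their printed output is substituted into the text.
import contextlib
import io
import re

SOURCE = """\
Mean of the sample:
<<print(sum([1, 2, 3, 4]) / 4)>>
"""

def render(source):
    def run(match):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(match.group(1), {})        # execute the embedded chunk
        return buf.getvalue().strip()
    return re.sub(r"<<(.+?)>>", run, source, flags=re.S)

print(render(SOURCE))   # Mean of the sample:
                        # 2.5
```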
Abstract:
Establishment of a federated service landscape in the Ruhr region based on SAML, with the goal of enabling cross-organizational use of web-based IT services.
Abstract:
Digital signage is a digital communications technology that has been used in recent years to replace traditional printed advertising. It improves the presentation and promotion of the advertised products and facilitates the exchange of information thanks to its placement in public places or outdoors. The applications of this new advertising method are very varied, ranging from private settings within companies to public places such as shopping centres. Although the first and main use of digital signage is advertising, so that the user feels a need to purchase products, the possibility of offering additional information about particular items through new technologies is also very important in this field. The application developed in this project is an Adobe Flash program driven by XML. Through a touch screen, a museum visitor can interactively access a menu showing the different artistic styles of a given period of history. Through a timeline, information can be accessed about each object on display in the exhibition. Images can also be viewed of the most important details of an object that cannot be seen with the naked eye, since handling the objects is not allowed. The interactive screen serves the exhibition visitor as an extra tool for gathering information about what they are looking at, through a technology that is new and easy for everyone to use, since only one's own hands are needed. Ease of use in applications like this is very important, as the end user may have no technological background, so the information must be presented clearly. In conclusion, digital signage is an expanding market and companies should invest in content development: technologies keep advancing whether or not digital signage does, and this sector could be very useful in the not-too-distant future, since the information it can deliver to viewers everywhere is far more valid and useful than that provided by a simple printed poster on a billboard.
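The touch-screen application described above is driven by XML content. The sketch below shows a hypothetical XML layout for a timeline of periods, objects and detail images, parsed with Python's standard xml.etree module; all element and attribute names are invented and are not taken from the project.

```python
# Hypothetical XML that a Flash-based, touch-screen timeline could be driven by.
import xml.etree.ElementTree as ET

TIMELINE_XML = """\
<timeline>
  <period name="Baroque" start="1600" end="1750">
    <object id="obj-12" title="Gilded chalice">
      <detail image="chalice_engraving.jpg" caption="Engraving on the base"/>
    </object>
  </period>
</timeline>
"""

root = ET.fromstring(TIMELINE_XML)
for period in root.findall("period"):
    print(period.get("name"), period.get("start"), "-", period.get("end"))
    for obj in period.findall("object"):
        print("  ", obj.get("title"),
              [d.get("caption") for d in obj.findall("detail")])
```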
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery according to the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, in the Resulta and VulneraNET projects the discovery of ideas and of security knowledge in the web is covered, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks in the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
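A small sketch can make the content-level discovery rules concrete. Assuming a well-formed XHTML representation, the Python code below maps element paths to semantic properties and emits triples; the rule syntax, property names and sample page are invented for illustration and do not reproduce the thesis' scraping ontology or rule model.

```python
# Sketch of a content-level "discovery rule": a mapping from locations in an
# (X)HTML representation to semantic properties, applied to a REST resource
# representation to emit (subject, property, value) triples.
import xml.etree.ElementTree as ET

REPRESENTATION = """\
<html>
  <body>
    <div class="news">
      <h1>Flood warnings issued</h1>
      <span class="date">2011-03-02</span>
    </div>
  </body>
</html>
"""

DISCOVERY_RULES = {                     # element path -> semantic property
    "./body/div/h1": "news:headline",
    "./body/div/span": "news:publicationDate",
}

def discover(representation, rules, subject="_:article1"):
    root = ET.fromstring(representation)
    triples = []
    for path, prop in rules.items():
        for element in root.findall(path):
            triples.append((subject, prop, element.text.strip()))
    return triples

for triple in discover(REPRESENTATION, DISCOVERY_RULES):
    print(triple)
```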
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. The requirements were (i) to use open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) to use a suitable environment for statistical modelling and computation, and (iii) to produce an open source solution. The system couples the 52-North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
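As a much simplified stand-in for the interpolation performed behind the INTAMAP service (which uses geostatistical methods in R, not the method below), this Python sketch applies inverse-distance weighting to a handful of invented observations to show the basic prediction step.

```python
# Inverse-distance weighting over a few point observations (illustration only).
import math

observations = [                      # (x, y, measured value) - invented data
    (0.0, 0.0, 4.1),
    (1.0, 0.0, 5.3),
    (0.0, 1.0, 3.8),
    (1.0, 1.0, 6.0),
]

def idw(x, y, obs, power=2.0):
    num, den = 0.0, 0.0
    for ox, oy, value in obs:
        d = math.hypot(x - ox, y - oy)
        if d == 0.0:
            return value              # exact hit on an observation point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

print(round(idw(0.5, 0.5, observations), 2))   # 4.8 (all points equidistant here)
```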
Abstract:
Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web-based systems, or systems based on web services, such as Taverna and Kepler, allow composition of models based on standard web service technologies. At the same time, the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processes, including complex environmental models. The current Open Geospatial Consortium service stack employs the Extensible Markup Language as its default data exchange standard, and widely used encodings such as JavaScript Object Notation can often only be used when incorporated within Extensible Markup Language. Similarly, no successful engagement of the Web Processing Service standard with the well-supported technologies of the Simple Object Access Protocol and the Web Services Description Language has been seen. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus to realising the vision of a useful 'model web'.
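To show what a pure SOAP request to such a processing service could look like, the Python sketch below builds a SOAP 1.1 envelope for a hypothetical 'Interpolate' operation; the operation name, parameters and target namespace are assumptions for illustration, not part of the paper's proposed service.

```python
# Minimal SOAP 1.1 request body for a hypothetical processing operation.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
PROC_NS = "http://example.org/processing"          # hypothetical service namespace

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("proc", PROC_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, f"{{{PROC_NS}}}Interpolate")
ET.SubElement(request, f"{{{PROC_NS}}}dataset").text = "http://example.org/obs.xml"
ET.SubElement(request, f"{{{PROC_NS}}}method").text = "ordinary-kriging"

print(ET.tostring(envelope, encoding="unicode"))
```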