977 results for XML Markup Language


Relevance:

80.00%

Publisher:

Abstract:

This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams that would be needed to use them on the web for presentation purposes. In contrast, object-oriented animation proves to be efficient in terms of network resource requirements. We defined an animation model to support drawing-based and frame-based animation. We also extended the HyperText Markup Language to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the web.
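The object-oriented idea the report describes can be sketched in a few lines: rather than streaming rendered frames, the client receives per-object keyframes and interpolates between them locally. The sketch below (in Python, with invented names; the original system extended HTML and Mosaic) shows only the interpolation step.

```python
# Hypothetical sketch of the object-oriented animation idea: the client
# receives a few keyframes per object and interpolates positions locally.
# All names are illustrative, not taken from the report.

def interpolate(keyframes, t):
    """Linearly interpolate an object's (x, y) position at time t
    from a sorted list of (time, x, y) keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1:]
    for (t0, x0, y0), (t1, x1, y1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return keyframes[-1][1:]

# A ball moving from (0, 0) to (100, 50) over one second needs only
# two keyframes on the wire, rather than a stream of video frames.
ball = [(0.0, 0.0, 0.0), (1.0, 100.0, 50.0)]
print(interpolate(ball, 0.5))  # (50.0, 25.0)
```

This is why the model is light on network resources: the wire carries a handful of coordinates per object, and the rendering work happens on the client.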

Relevance:

80.00%

Publisher:

Abstract:

Rule testing in transport scheduling is a complex and potentially costly business problem. This paper proposes an automated method for the rule-based testing of business rules, using the eXtensible Markup Language (XML) for rule representation and transportation. A compiled approach to rule execution is also proposed for performance-critical scheduling systems.
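A minimal sketch of the idea, with an invented XML schema (the paper's actual rule representation is not reproduced here): a rule arrives as XML and is compiled once into a closure, so performance-critical scheduling code never re-parses the markup.

```python
# Illustrative only: a scheduling business rule carried in XML and
# "compiled" into a callable for fast repeated evaluation.
# Element and attribute names are invented for this sketch.
import xml.etree.ElementTree as ET

RULE_XML = """
<rule name="max-shift-length">
  <condition field="shift_hours" op="gt" value="10"/>
  <action>reject</action>
</rule>
"""

OPS = {"gt": lambda a, b: a > b, "le": lambda a, b: a <= b}

def compile_rule(xml_text):
    root = ET.fromstring(xml_text)
    cond = root.find("condition")
    field = cond.get("field")
    op = OPS[cond.get("op")]
    value = float(cond.get("value"))
    action = root.findtext("action")
    # The returned closure avoids re-parsing the XML on every evaluation,
    # which is the point of a compiled approach in performance-critical use.
    def evaluate(record):
        return action if op(float(record[field]), value) else "accept"
    return evaluate

check = compile_rule(RULE_XML)
print(check({"shift_hours": 12}))  # reject
print(check({"shift_hours": 8}))   # accept
```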

Relevance:

80.00%

Publisher:

Abstract:

Web 2.0 software in general, and wikis in particular, have been receiving growing attention as they constitute new and powerful tools capable of supporting information sharing, knowledge creation and a wide range of collaborative processes and learning activities. This paper briefly introduces some of the new opportunities made possible by Web 2.0, or the social Internet, focusing on those offered by the use of wikis as learning spaces. A wiki allows documents to be created, edited and shared on a group basis; it has a very easy and efficient markup language and requires only a simple Web browser. One of the most important characteristics of wiki technology is the ease with which pages are created and edited. Because wiki content can be edited by its users, its pages and structure form a dynamic entity in permanent evolution, where users can insert new ideas, supplement previously existing information and correct errors and typos in a document at any time, up to the agreed final version. This paper explores wikis as a collaborative learning and knowledge-building space and their potential for supporting Virtual Communities of Practice (VCoPs). In the academic years 2007/8 and 2008/9, students of the Business Intelligence module of the Master's programme in Knowledge Management and Business Intelligence at the Instituto Superior de Estatistica e Gestao de Informacao of the Universidade Nova de Lisboa, Portugal, were actively involved in the creation of BIWiki, a wiki for Business Intelligence in the Portuguese language. Based on usage patterns and feedback from the students participating in this experience, some conclusions are drawn regarding the potential of this technology to support the emergence of VCoPs, and some provisional suggestions are made regarding the use of wikis to support information sharing, knowledge creation and transfer, and collaborative learning in Higher Education.
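As an illustration of how lightweight wiki markup is, a toy converter for a few MediaWiki-style constructs fits in a handful of regular expressions; a real wiki engine naturally handles far more cases.

```python
# Toy wiki-markup-to-HTML converter, for illustration only.
# The syntax shown is MediaWiki-style; order matters (bold before italic,
# since ''' contains '').
import re

def wiki_to_html(text):
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)              # '''bold'''
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)                # ''italic''
    text = re.sub(r"\[\[(.+?)\]\]", r'<a href="\1">\1</a>', text)  # [[link]]
    return text

print(wiki_to_html("'''BIWiki''' is a wiki about ''Business Intelligence''."))
```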

Relevance:

80.00%

Publisher:

Abstract:

The Styled Layer Descriptor 3D (SLD3D) proposal, whose specification resides with the Open Geospatial Consortium (OGC), will make it possible to generate three-dimensional information. The interpretation of these styles through the Keyhole Markup Language (KML) and their representation in 3D interfaces will guarantee the exploitation of the third spatial component, allowing a realistic representation of the environment.

Relevance:

80.00%

Publisher:

Abstract:

One of the issues traditionally addressed with Geographic Information Systems (GIS) is solving problems of optimal facility location. However, deficiencies and shortcomings have been found in the usual GIS functions for the study of this type of problem. For that reason, a software package called Localiza (Bosque, Palm and Gómez, 2008), specialized in the location of social facilities, was developed at the Universidad de Alcalá. That application, however, is built on the Idrisi software (version for Windows 2.0) and depends directly on the data formats of that GIS. To solve this problem, the possibility of offering this type of software as a service has been considered. The OGC's Web Processing Service (WPS) specification provides a framework for offering location-allocation models as services over the Internet. Implementing these models as WPS services would facilitate interoperability between systems and make it possible to run the models independently of platform and programming language, so that the functionality is available in both web and desktop environments. In addition, the use of standardized data formats such as GML (Geography Markup Language) has been considered, so that there is complete independence from the proprietary formats of the GIS products on the market. Likewise, the intention is to use open technologies and standards throughout.
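At the core of any location-allocation model offered as a WPS process is an allocation step. A minimal illustrative version (not the Localiza implementation) assigns each demand point to its nearest facility by Euclidean distance:

```python
# Nearest-facility allocation, the basic building block of
# location-allocation models; real models (p-median, coverage models, ...)
# add an optimization layer on top of this step.
from math import hypot

def allocate(demands, facilities):
    """Map each demand point to the index of its nearest facility."""
    return {
        d: min(range(len(facilities)),
               key=lambda i: hypot(d[0] - facilities[i][0],
                                   d[1] - facilities[i][1]))
        for d in demands
    }

facilities = [(0, 0), (10, 0)]
demands = [(1, 1), (9, 2), (4, 0)]
print(allocate(demands, facilities))  # {(1, 1): 0, (9, 2): 1, (4, 0): 0}
```

In a WPS setting the demand and facility coordinates would arrive as GML geometries and the resulting assignment would be returned in the process response.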

Relevance:

80.00%

Publisher:

Abstract:

At present, WM-data's Field Module in their MoveITS system cannot handle map data in formats other than Shape. The Field Module is a TabletPC running the Windows XP operating system. It can be used to edit certain geographic information, such as sign positions. The Field Module is used by Stockholm's Technical Office for the inventory of sign posts. Stockholm's Technical Office will begin delivering its maps in GML (Geography Markup Language). Since WM-data's Field Module cannot handle that format, this degree project set out to develop components for handling it. As it proved difficult to obtain information about GML during the project, greater focus was instead placed on MIF (MapInfo Interchange Format). Since other municipalities use MIF, WM-data is also interested in having components developed for this format. A large number of classes have been developed for handling MIF files. These classes are written entirely in C# and were produced during the project based on the format specifications published by the company MapInfo. For GML, information has been compiled that can serve as a basis for developing components to handle that format.
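As a rough illustration of what parsing MIF involves (greatly simplified; the C# classes described above follow MapInfo's full specification), a point-only reader might look like this in Python:

```python
# Heavily simplified MIF reading: after the "Data" line, point objects
# appear as lines of the form "Point x y". A real parser must also handle
# Line, Pline, Region, the full header, and the companion MID file.
def parse_mif_points(text):
    points, in_data = [], False
    for line in text.splitlines():
        line = line.strip()
        if line.lower() == "data":
            in_data = True
        elif in_data and line.lower().startswith("point "):
            _, x, y = line.split()
            points.append((float(x), float(y)))
    return points

SAMPLE = """Version 300
Columns 0
Data

Point 10.5 20.0
Point -3.0 4.25
"""
print(parse_mif_points(SAMPLE))  # [(10.5, 20.0), (-3.0, 4.25)]
```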

Relevance:

80.00%

Publisher:

Abstract:

Conceptual modeling of geographic databases (GDB) is fundamental for reuse, since geographic reality is quite complex and, more than that, part of it is used recurrently in most GDB projects. Conceptual modeling guarantees independence from the database implementation and improves project documentation, preventing the documentation from being merely a collection of documents written in the application's jargon. A well-defined conceptual model offers a canonical representation of geographic reality, enabling the reuse of sub-schemas. To obtain the sub-schemas to be reused, the process of Knowledge Discovery in Databases (KDD) can be applied. The final result of KDD produces the so-called analysis patterns. In the scope of this work, the analysis patterns constitute the reusable sub-schemas of a database's conceptual model. The KDD process has several stages, from data selection and preparation to mining and post-processing (analysis of results). In data preparation, one of the main problems to be faced is the possible heterogeneity of the data. In this work, since the input data are the conceptual schemas of GDBs, and given the absence of a widely accepted GDB modeling standard, the heterogeneities tend to increase. Data preparation must integrate different conceptual schemas, based on different data models and designed by different groups working autonomously as a distributed community. To resolve the conflicts between conceptual schemas, a methodology was developed, supported by a software architecture, which divides the pre-processing phase into two stages, one syntactic and one semantic. The syntactic stage converts the schemas into a canonical format, the Geography Markup Language (GML). A reasonable number of data models must be considered, as a consequence of the absence of a data model widely accepted as a standard for GDB design. For each of the different data models, a set of rules was developed and a wrapper implemented. To support the semantic stage of the integration, an ontology is used to semantically integrate the conceptual schemas of the different projects. The algorithm for querying and updating the knowledge base consists of mathematical methods for measuring the similarity between concepts. Once the analysis patterns have been identified, they are stored in a knowledge base that must be easy to query and update. Again, the ontology can be used as the knowledge base, storing the analysis patterns and allowing designers to consult it while modeling their applications. The query results help compare the conceptual schema under construction with past solutions accepted as correct.
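The similarity measures mentioned for ontology-based matching can be illustrated with one common, simple choice: the Jaccard coefficient over each concept's property names (shown purely as an example; the work does not necessarily use this exact measure).

```python
# Jaccard similarity between two concepts, represented here as the sets of
# their property names. Concepts and properties below are invented examples.
def jaccard(props_a, props_b):
    a, b = set(props_a), set(props_b)
    return len(a & b) / len(a | b) if a | b else 1.0

road = {"name", "length", "surface", "geometry"}
highway = {"name", "length", "lanes", "geometry"}
river = {"name", "flow", "geometry"}

print(round(jaccard(road, highway), 2))  # 0.6 -- likely the same concept
print(round(jaccard(road, river), 2))    # 0.4 -- probably distinct
```

A matcher would align concepts whose similarity exceeds a threshold, which is how schemas from different projects can be merged into one knowledge base.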

Relevance:

80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

80.00%

Publisher:

Abstract:

This paper introduces Java applet programs for a WWW (World Wide Web)-HTML (HyperText Markup Language)-based multimedia course in Power Electronics. The applet programs were developed to provide interactive visual simulation and analysis of idealized uncontrolled single-phase and three-phase rectifiers. In addition, this paper discusses the development and use of Java applet programs to solve some design-oriented equations for rectifier applications. The major goal of these Java applets was to help students increase their pace in the Power Electronics course, emphasizing waveform analysis and providing conditions for an on-line comparative analysis among different hands-on laboratory experiences, via a normal Internet TCP/IP connection. Using the proposed Java applets, embedded in the WWW-HTML-based course in Power Electronics, an important improvement in students' learning of the course content was observed. The course structure thus becomes fluid, allowing a true on-line course over the WWW and motivating students to learn its content and apply it in application-oriented projects and their homework.
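The design-oriented equations such applets solve are standard textbook results; for example, an idealized single-phase full-wave rectifier with peak input Vm has average output 2·Vm/π and RMS output Vm/√2, which the short Python check below confirms numerically (the 311 V peak is just an illustrative 220 V RMS mains value).

```python
# Closed-form rectifier results checked against a sampled |Vm*sin| waveform.
from math import pi, sin, sqrt

Vm = 311.0  # peak of a 220 V RMS mains sine (illustrative value)

# textbook closed forms for the ideal full-wave rectified output
v_avg = 2 * Vm / pi
v_rms = Vm / sqrt(2)

# numerical check: average |Vm*sin(theta)| over one full period
N = 100_000
samples = [abs(Vm * sin(2 * pi * k / N)) for k in range(N)]
print(round(v_avg, 1), round(sum(samples) / N, 1))
print(round(v_rms, 1), round(sqrt(sum(s * s for s in samples) / N), 1))
```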

Relevance:

80.00%

Publisher:

Abstract:

This paper presents Java applet programs for a WWW (World Wide Web)-HTML (HyperText Markup Language)-based multimedia course in basic power electronics circuits. These tools make use of the benefits of the Java language to provide a dynamic and interactive approach to simulating steady-state idealized rectifiers (uncontrolled and controlled; single-phase and three-phase). In addition, this paper discusses the development and use of the Java applet programs to assist the teaching of basic rectifier power electronics circuits, and to serve as a first design tool for basic power electronics circuits in laboratory experiments. In order to validate the developed simulation applets, the results were compared with results obtained from the well-known simulator package PSPICE. © 2005 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret and understand the different concepts that arise at different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology is called BP3, which stands for Business Process Patterns Perspective. The BP3 methodology uses a question-answer interface to capture different business requirements from the designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements by means of a set of production rules to complete the inter-process communication among these patterns.
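The pattern-plus-production-rule idea can be caricatured in a few lines (all names invented; BP3's real rules and patterns are richer): answers captured from the designer select which pre-defined process patterns enter the final model.

```python
# Toy production-rule system: each rule pairs a condition over the captured
# answers with a process pattern to include. Rules and patterns are invented.
RULES = [
    (lambda a: a["payment"] == "before_delivery", "Prepayment"),
    (lambda a: a["payment"] == "after_delivery", "Invoice"),
    (lambda a: a["goods"] == "physical", "Shipment"),
    (lambda a: a["goods"] == "digital", "Download"),
]

def derive_process(answers):
    """Return the process patterns selected by the designer's answers."""
    return [pattern for cond, pattern in RULES if cond(answers)]

answers = {"payment": "after_delivery", "goods": "physical"}
print(derive_process(answers))  # ['Invoice', 'Shipment']
```

A real method would additionally wire the selected patterns together into an executable process model; here only the selection step is shown.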

Relevance:

80.00%

Publisher:

Abstract:

This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax and is able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals, in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
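In the same spirit (with an invented pattern vocabulary, not IML's actual seven patterns), a segmenter can reduce every element of an XHTML-like document to a small, fixed set of patterns:

```python
# Toy document segmenter: each element of an XHTML-like tree is mapped to
# one of a handful of pattern names, so heterogeneous markup normalizes to
# the same small vocabulary. The vocabulary here is invented.
import xml.etree.ElementTree as ET

PATTERN_OF = {
    "p": "block", "li": "block", "h1": "block",
    "em": "inline", "a": "inline", "span": "inline",
    "ul": "container", "div": "container", "body": "container",
}

def segment(element):
    """Project an element tree onto the pattern vocabulary."""
    pattern = PATTERN_OF.get(element.tag, "container")
    return (pattern, [segment(child) for child in element])

doc = ET.fromstring("<body><h1>Title</h1><p>Text with <em>emphasis</em>.</p></body>")
print(segment(doc))
```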

Relevance:

80.00%

Publisher:

Abstract:

As distributed collaborative applications and architectures are adopting policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, and policy integration. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in XACML (eXtensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
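A toy version of policy similarity in the spirit of EXAM-S (real XACML analysis reasons over targets, conditions and combining algorithms; XML namespaces are omitted here for brevity) compares two policies by the (RuleId, Effect) pairs of their rules:

```python
# Illustrative policy similarity: extract (RuleId, Effect) pairs from two
# XACML-like policies and compare them as sets. Simplified, namespace-free
# markup; real XACML policies are considerably richer.
import xml.etree.ElementTree as ET

def rule_set(policy_xml):
    root = ET.fromstring(policy_xml)
    return {(r.get("RuleId"), r.get("Effect")) for r in root.iter("Rule")}

P1 = '<Policy><Rule RuleId="read" Effect="Permit"/><Rule RuleId="write" Effect="Deny"/></Policy>'
P2 = '<Policy><Rule RuleId="read" Effect="Permit"/><Rule RuleId="write" Effect="Permit"/></Policy>'

a, b = rule_set(P1), rule_set(P2)
similarity = len(a & b) / len(a | b)
print(similarity)  # the policies agree on "read" but disagree on "write"
```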

Relevance:

80.00%

Publisher:

Abstract:

The work was divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software tools are used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyzes an intrusion coming from outside, directed at sensitive servers on a LAN. This analysis is carried out on the files captured by the two network interfaces configured in promiscuous mode on a probe present in the LAN. Two interfaces are used in order to connect to two LAN segments with two different subnet masks. The attack is analyzed with various software tools; this can in fact be considered a third part of the work, in which the files captured by the two interfaces are analyzed first with software that handles full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally with the statistical data processed with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration to monitor, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.

Relevance:

80.00%

Publisher:

Abstract:

For various reasons, it is important, if not essential, to integrate the computations and code used in data analyses, methodological descriptions, simulations, etc. with the documents that describe and rely on them. This integration allows readers to both verify and adapt the statements in the documents. Authors can easily reproduce them in the future, and they can present the document's contents in a different medium, e.g. with interactive controls. This paper describes a software framework for authoring and distributing these integrated, dynamic documents that contain text, code, data, and any auxiliary content needed to recreate the computations. The documents are dynamic in that the contents, including figures, tables, etc., can be recalculated each time a view of the document is generated. Our model treats a dynamic document as a master or "source" document from which one can generate different views in the form of traditional, derived documents for different audiences. We introduce the concept of a compendium as both a container for the different elements that make up the document and its computations (i.e. text, code, data, ...), and as a means for distributing, managing and updating the collection. The step from disseminating analyses via a compendium to reproducible research is a small one. By reproducible research, we mean research papers with accompanying software tools that allow the reader to directly reproduce the results and employ the methods that are presented in the research paper. Some of the issues involved in paradigms for the production, distribution and use of such reproducible research are discussed.
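The master-document idea can be sketched with an invented chunk syntax: each view is regenerated by executing the embedded code, so reported numbers cannot drift from the computations that produced them.

```python
# Minimal dynamic-document renderer: {{ ... }} chunks in the source are
# evaluated against the current data each time a view is generated.
# The chunk syntax and example text are invented for this sketch.
import re

SOURCE = """The mean of our measurements is {{ sum(data)/len(data) }},
from {{ len(data) }} observations."""

def render(source, env):
    """Produce a view by evaluating every chunk against env."""
    return re.sub(r"\{\{(.+?)\}\}",
                  lambda m: str(eval(m.group(1), {}, env)),
                  source)

print(render(SOURCE, {"data": [2, 4, 9]}))
```

Re-running `render` with updated data regenerates the derived document, which is exactly the compendium workflow: one source, many always-current views.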