960 results for Document object model - DOM
Abstract:
Organizations all over the world are becoming increasingly complex, and there is a need to capture that complexity. This is where the DEMO methodology (Design and Engineering Methodology for Organizations), created and developed by Jan L. G. Dietz, reaches its potential: capturing the structure of business processes in a coherent and consistent set of diagrams with their respective grammatical rules. The creation of the WAMM (Wiki Aided Meta Modeling) platform was the main focus of this thesis, and its principal precursor was the idea of creating a Meta-Editor that supports semantic data and uses MediaWiki. This prototype Meta-Editor uses MediaWiki as a data repository and draws on the ideas of the Universal Enterprise Adaptive Object Model and the Semantic Web to build a platform that suits our needs through Semantic MediaWiki, which helps the computer interconnect information and people more comprehensively by giving meaning to the content of the pages. The proposed meta-modeling platform allows the specification of the abstract syntax (i.e., the grammar) and the concrete syntax (e.g., symbols and connectors) of any language, as well as its model types and diagram types. We use the DEMO language as a proof of concept and example. All such specifications are made in a coherent and formal way by creating semantic wiki pages and the semantic properties connecting them.
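The abstract does not reproduce the WAMM page layout, but the general idea of specifying a language element through semantic wiki pages and properties can be sketched as follows; the property names (instanceOf, hasSymbol, connectsTo) and the DEMO element used here are illustrative assumptions, not the platform's actual vocabulary.

```python
# Minimal sketch (not the WAMM implementation): generating Semantic MediaWiki
# markup that records the abstract and concrete syntax of a language element
# as semantic properties. Property names are illustrative assumptions.

def element_page(name, metaclass, symbol, connects_to):
    """Return wiki text for one metamodel element page."""
    lines = [
        f"This page specifies the element '{name}'.",
        f"* Metaclass: [[instanceOf::{metaclass}]]",   # abstract syntax (grammar)
        f"* Symbol: [[hasSymbol::{symbol}]]",          # concrete syntax
    ]
    for target in connects_to:
        lines.append(f"* Allowed connector: [[connectsTo::{target}]]")
    return "\n".join(lines)

if __name__ == "__main__":
    print(element_page(
        name="Transaction Kind",
        metaclass="DEMO Construction Model element",
        symbol="diamond-in-disk",
        connects_to=["Actor Role"],
    ))
```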
Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19., 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.
Abstract:
An emergency department serving an area provides healthcare; its main objective is to attend to the urgent pathology that arrives at the hospital, and the level of commitment assumed consists of diagnosing, treating and stabilizing, as far as possible, that urgent pathology. Another objective is to manage citizens' demand for urgent care through an initial priority-selection system (Triage) that selects, prioritizes, organizes and manages that demand. To control and carry out this work as effectively as possible, management tools are used to track patients from admission to the emergency department until discharge. The applications developed are the following: Gestión de Pacientes en Urgencias (Emergency Patient Management): this application assigns an initial state to the patient and allows that state to be changed using the Triage (assessment) method, the most widespread in emergency medicine. In addition, diagnostic tests can be requested and laboratory markers viewed to monitor the patient's evolution. Finally, a discharge report can be produced for the patient. Informadores de Urgencias (Emergency Information Desk): this application manages the physical location of the patient within the emergency department, allowing transfers between the different locations and controlling the information given to relatives; relatives and contact telephone numbers can be stored so that they can be kept informed. Development followed the MVC (model-view-controller) pattern, an architectural pattern that separates an application's data, graphical user interface and control logic into distinct components. The software used to develop the applications is InterSystems CACHÉ, which allows the creation of a multidimensional database. The Caché object model is based on the ODMG (Object Database Management Group) standard and supports many advanced features. CACHÉ provides Zen, a complete library of pre-built object components and development tools based on InterSystems' CSP (Caché Server Pages) and object technologies. Zen is especially suitable for developing Web versions of client/server applications originally created with tools such as Visual Basic or PowerBuilder.
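As a rough illustration of the model-view-controller split described above, the following sketch separates patient data, triage-state rules and display formatting; it is not the Caché/Zen implementation, and the state names and class names are hypothetical.

```python
# Illustrative sketch only (not the Caché/Zen application): MVC separation for
# a triage workflow. Triage levels and field names are assumptions.

from dataclasses import dataclass, field

TRIAGE_LEVELS = ["resuscitation", "emergent", "urgent", "less urgent", "non-urgent"]

@dataclass
class Patient:                      # model: holds the data only
    name: str
    triage_level: str = "non-urgent"
    location: str = "waiting room"
    tests: list = field(default_factory=list)

class PatientController:            # controller: applies the business rules
    def __init__(self, patient):
        self.patient = patient

    def set_triage(self, level):
        if level not in TRIAGE_LEVELS:
            raise ValueError(f"unknown triage level: {level}")
        self.patient.triage_level = level

    def request_test(self, test_name):
        self.patient.tests.append(test_name)

def render(patient):                # view: formats the data for display
    return (f"{patient.name}: triage={patient.triage_level}, "
            f"location={patient.location}, tests={patient.tests}")

if __name__ == "__main__":
    p = Patient("Jane Doe")
    c = PatientController(p)
    c.set_triage("urgent")
    c.request_test("troponin")
    print(render(p))
```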
Abstract:
This dissertation analyses the middleware technologies CORBA (Common Object Request Broker Architecture), COM/DCOM (Component Object Model/Distributed Component Object Model), J2EE (Java 2 Enterprise Edition) and Web Services (including .NET) with respect to their suitability for tightly and loosely coupled distributed applications. In addition, primarily for CORBA, the dynamic CORBA components DII (Dynamic Invocation Interface) and IFR (Interface Repository) and the generic data types Any and DynAny (dynamic Any) are examined in detail. The goals are: a. to reach concrete conclusions about these components and to determine in which settings these generic approaches are justified; b. to analyse the timing behaviour of the dynamic components with respect to obtaining information about unknown objects; c. to measure the timing behaviour of the dynamic components with respect to their communication; d. to measure and analyse the timing behaviour of creating generic data types and inserting data into them; e. to measure and analyse the timing behaviour of creating data types that are unknown, i.e. not described in IDL, at run time; f. to show the advantages and disadvantages of the dynamic components, to define their areas of application, and to compare their capabilities with other technologies such as COM/DCOM, J2EE and Web Services; g. to make statements regarding tight and loose coupling. CORBA is chosen as a standardized and complete distribution platform for investigating the problems listed above. With respect to its dynamic behaviour, which at the time of this work had not yet been investigated or only insufficiently, CORBA and Web Services point the way regarding: a. working with unknown objects, which may well have implications for the development of intelligent software agents; b. the integration of legacy applications; c. the possibilities in connection with B2B (business-to-business). These problems also include general questions about the marshalling/unmarshalling of data and the costs this entails, as well as general statements about the real-time capability of CORBA-based distributed applications. The results are then transferred, as far as admissible, to other technologies such as COM/DCOM, J2EE and Web Services. The comparisons of CORBA with DCOM, CORBA with J2EE and CORBA with Web Services show in detail the suitability of these technologies for loose and tight coupling. Furthermore, general concepts regarding architecture and the optimization of communication are derived from the results obtained. These recommendations apply without restriction to all of the investigated technologies in the context of distributed processing.
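The dissertation's measurements contrast statically bound (stub-based) invocation with dynamic (DII-style) invocation. The following sketch is only a language-neutral analogy of that kind of latency measurement, written in Python rather than against any CORBA ORB; the service and operation names are invented, and no CORBA API is used.

```python
# Illustrative sketch only: comparing a statically bound call with a
# dynamically looked-up one, as an analogy for stub-based vs DII-style
# invocation timing. This is not CORBA; all names are hypothetical.

import timeit

class Service:
    def compute(self, x):
        return x * x

svc = Service()

def static_call():
    return svc.compute(21)          # binding known ahead of time

def dynamic_call():
    op = getattr(svc, "compute")    # operation looked up by name at run time
    return op(21)

if __name__ == "__main__":
    n = 100_000
    t_static = timeit.timeit(static_call, number=n)
    t_dynamic = timeit.timeit(dynamic_call, number=n)
    print(f"static : {t_static:.4f} s for {n} calls")
    print(f"dynamic: {t_dynamic:.4f} s for {n} calls")
```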
Abstract:
Software corpora facilitate reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, which is often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, which increases the effort of cross-language analysis. To address these issues we propose Pangea, an infrastructure that allows fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object-model snapshots that can be loaded directly into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
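The core idea of querying a pre-built object-model snapshot without re-parsing can be sketched roughly as below; this is not Pangea's actual snapshot format, and the entity classes, file name and query are illustrative assumptions.

```python
# Minimal sketch (not Pangea's format): persist a language-independent
# object-model snapshot once, then let analyses load and query it directly,
# with no source-code parsing. Class and attribute names are illustrative.

import pickle
from dataclasses import dataclass

@dataclass
class MethodEntity:
    name: str
    lines_of_code: int

@dataclass
class ClassEntity:
    name: str
    language: str
    methods: list

# Build a tiny snapshot (normally produced by a per-language importer).
snapshot = [
    ClassEntity("OrderService", "java",
                [MethodEntity("place", 42), MethodEntity("cancel", 18)]),
    ClassEntity("order_model", "python", [MethodEntity("validate", 30)]),
]
with open("corpus.snapshot", "wb") as f:
    pickle.dump(snapshot, f)

# An analysis loads the snapshot into memory and queries it without parsing.
with open("corpus.snapshot", "rb") as f:
    model = pickle.load(f)

long_methods = [(c.name, m.name)
                for c in model for m in c.methods if m.lines_of_code > 25]
print(long_methods)
```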
Abstract:
The implementation of Internet technologies has led to e-manufacturing strategies becoming more widely used and to the development of tools for compiling, transforming and synchronizing manufacturing data over the Web. In this context, a potential area for development is the extension of virtual manufacturing to Performance Measurement (PM) processes, a critical area for decision-making and for implementing improvement actions in manufacturing. This thesis proposes an Information Architecture for developing virtual tools in the PM domain and for integrating decision-support systems in e-manufacturing. Its application ensures the interoperability needed in decision-making data-processing tasks. It comprises three sub-systems: a conceptual (data) model, an object model, and a Web framework composed of a PM information platform and a PM Web Services (WS) architecture. The data model and the object model are based on developing all the information required to define and obtain the different measurement indicators that PM processes require. The PM information platform uses XML and B2MML technologies to structure a new set of performance-measurement exchange message schemas (PM-XML). It is complemented by a PM Web Services architecture that uses these schemas to integrate the coding, decoding, translation and assessment processes of the key performance indicators (KPIs). These services perform all the transactions that transform the source data into smart information usable in decision-making processes. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the architecture.
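The PM-XML schemas themselves are not reproduced in the abstract, so the sketch below only illustrates the general pattern of exchanging an XML performance-measurement message and deriving a KPI from it; every element name and the availability formula are assumptions for illustration.

```python
# Hypothetical sketch: encode a measurement exchange message as XML and decode
# it to evaluate a KPI. Element names are invented; they are not PM-XML.

import xml.etree.ElementTree as ET

def build_message(equipment_id, uptime_h, downtime_h):
    """Build a small performance-measurement exchange message."""
    msg = ET.Element("PerformanceMeasurement")
    ET.SubElement(msg, "EquipmentID").text = equipment_id
    ET.SubElement(msg, "UptimeHours").text = str(uptime_h)
    ET.SubElement(msg, "DowntimeHours").text = str(downtime_h)
    return ET.tostring(msg, encoding="unicode")

def availability_kpi(xml_text):
    """Decode the message and compute an availability KPI."""
    root = ET.fromstring(xml_text)
    up = float(root.findtext("UptimeHours"))
    down = float(root.findtext("DowntimeHours"))
    return up / (up + down)

if __name__ == "__main__":
    message = build_message("PUMP-07", uptime_h=712, downtime_h=8)
    print(message)
    print(f"availability = {availability_kpi(message):.3f}")
```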
Abstract:
This paper focuses on the problem of decomposing a Grid system by developing its object model. The Unified Modelling Language (UML) is used as the formalization tool. The approach is motivated by the complexity of the system being analysed and by the need to design a simulation model.
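An object model expressed as a UML class diagram maps naturally onto classes with attributes and associations. The fragment below is only an illustration of that mapping; the classes, attributes and operations are assumptions, not the paper's actual Grid decomposition.

```python
# Illustrative sketch only: a fragment of the kind of object model a UML class
# diagram of a Grid system might describe. Classes and associations are assumed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    cpu_hours: float

@dataclass
class ComputeNode:
    hostname: str
    cores: int
    queue: List[Task] = field(default_factory=list)

    def submit(self, task: Task) -> None:
        self.queue.append(task)

@dataclass
class GridSite:
    name: str
    nodes: List[ComputeNode] = field(default_factory=list)

    def total_cores(self) -> int:
        return sum(node.cores for node in self.nodes)

if __name__ == "__main__":
    site = GridSite("site-A", [ComputeNode("n01", 64), ComputeNode("n02", 32)])
    site.nodes[0].submit(Task("render", cpu_hours=3.5))
    print(site.total_cores(), len(site.nodes[0].queue))
```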
Abstract:
* The research has been partially supported by INFRAWEBS - IST FP62003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.
Abstract:
ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases, or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML into actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge to leverage semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e. domain-independent) way, but this process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: We proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives. We developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. This goal also had two aims: We presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies. We also proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
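The Double-Lazy Parser itself is not specified in the abstract, but the underlying idea of deferring DOM construction can be seen in Python's standard pulldom module, shown below as a minimal analogy: only the subtree the application asks for is expanded into DOM nodes. The example document and the selection criterion are invented for illustration.

```python
# Minimal sketch (not the dissertation's Double-Lazy Parser): lazy DOM
# construction with the standard-library pulldom module. Subtrees are only
# expanded into DOM nodes when explicitly requested.

from xml.dom import pulldom

XML = """
<library>
  <book id="1"><title>Semistructured Data</title></book>
  <book id="2"><title>Lazy Parsing</title></book>
</library>
"""

events = pulldom.parseString(XML)
for event, node in events:
    # Only the <book> element we care about is expanded into a full DOM
    # subtree; everything else streams past without building in-memory nodes.
    if (event == pulldom.START_ELEMENT and node.tagName == "book"
            and node.getAttribute("id") == "2"):
        events.expandNode(node)
        print(node.toxml())
```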
Abstract:
The paper addresses issues related to the design of a graphical query mechanism that can act as an interface to any object-oriented database system (OODBS) in general, and to the object model of ODMG 2.0 in particular. The paper gives a brief survey of related work and proposes an analysis methodology for evaluating such languages. Moreover, the user's view level of a new graphical query language for ODMG 2.0, namely GOQL (Graphical Object Query Language), is presented. The user's view level provides a graphical schema that does not contain any of the perplexing details of an object-oriented database schema, and it also provides a foundation for a graphical interface that can support ad hoc queries for object-oriented database applications. We illustrate the user's view level of GOQL with an example.
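A graphical query is commonly held internally as a small structure of selected classes, drawn conditions and marked attributes, which is then translated into textual OQL. The sketch below only illustrates that translation step; it is not GOQL, and the internal structure and example query are assumptions.

```python
# Illustrative sketch only (not GOQL): translating a diagram-level query
# structure into an OQL-like statement. All structures here are assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class GraphicalQuery:
    target_class: str          # the class node the user selected
    conditions: List[str]      # annotations drawn on the diagram
    projection: List[str]      # attributes marked for output

def to_oql(q: GraphicalQuery) -> str:
    """Translate the diagram-level query into an OQL-like statement."""
    select = ", ".join(f"x.{a}" for a in q.projection) or "x"
    where = " and ".join(q.conditions)
    oql = f"select {select} from {q.target_class} x"
    return f"{oql} where {where}" if where else oql

if __name__ == "__main__":
    gq = GraphicalQuery("Employee", ["x.salary > 50000"], ["name", "dept.name"])
    print(to_oql(gq))
    # -> select x.name, x.dept.name from Employee x where x.salary > 50000
```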
Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19., 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.
Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19., 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets from web pages by analysing both the text around an image and the image's appearance. The method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech datasets for images), which provide rich text and object-appearance information. Results are reported on two datasets. The first is Berg's collection of 10 animal categories, on which we significantly outperform previous approaches; on an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of tag-annotated images downloaded from the Internet. Image tags are noisy, so the method obtains the text feature of an unannotated image from the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, whereas this text feature may not change, because the auxiliary dataset likely contains a similar picture: while the tags associated with images are noisy, they are more stable than appearance when viewing conditions change. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers and is particularly effective when the training dataset is small. As more and more training data is collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method is useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck for large-scale datasets. This dissertation applies this approach to train classifiers for Flickr groups with many training examples per group. The resulting Flickr-group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better than conventional visual features on image matching, retrieval, and classification. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsically long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
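The most concrete step described above, building a text feature for an unannotated image from the tags of its k nearest neighbours in an auxiliary tagged collection, can be sketched as follows; the distance metric, normalization and toy data are assumptions, not the dissertation's implementation.

```python
# Minimal sketch (assumptions throughout, not the dissertation's code): a
# bag-of-tags feature for an unannotated image, aggregated from the tags of
# its k nearest neighbours in an auxiliary collection of tagged images.

import numpy as np

def knn_tag_feature(query_feat, aux_feats, aux_tags, vocab, k=5):
    """query_feat: (d,) visual feature; aux_feats: (n, d); aux_tags: list of tag lists."""
    dists = np.linalg.norm(aux_feats - query_feat, axis=1)
    neighbours = np.argsort(dists)[:k]
    # Accumulate neighbour tag counts into a bag-of-tags vector.
    feature = np.zeros(len(vocab))
    index = {tag: i for i, tag in enumerate(vocab)}
    for j in neighbours:
        for tag in aux_tags[j]:
            if tag in index:
                feature[index[tag]] += 1.0
    return feature / max(feature.sum(), 1.0)   # normalize to sum to 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    aux_feats = rng.normal(size=(100, 16))
    aux_tags = [["cat"] if i % 2 else ["dog", "pet"] for i in range(100)]
    vocab = ["cat", "dog", "pet"]
    print(knn_tag_feature(rng.normal(size=16), aux_feats, aux_tags, vocab))
```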
Abstract:
Background: Understanding transcriptional regulation through genome-wide microarray studies can help unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. Existing software systems for microarray data analysis implement these standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web services is advantageous in a distributed client-server environment, as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extension towards future transcriptomics methods based on high-throughput sequencing approaches, which have much higher computational requirements than microarrays.
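The "model-driven approach for automatically implementing a full object model" can be pictured as generating classes from a declarative model description rather than writing them by hand. The sketch below shows that general pattern only; the tiny model, class names and attributes are invented for illustration and are not the MAGE object model or EMMA 2's generator.

```python
# Hypothetical sketch of the model-driven idea: classes of an object model are
# generated automatically from a declarative description. The "model" below is
# invented for illustration; it is not the MAGE object model.

MODEL = {
    "BioAssay":    {"attributes": ["identifier", "name", "channels"]},
    "ArrayDesign": {"attributes": ["identifier", "provider", "num_features"]},
}

def generate_classes(model):
    """Create one Python class per model element, with the declared attributes."""
    classes = {}
    for class_name, spec in model.items():
        attrs = tuple(spec["attributes"])

        def __init__(self, _attrs=attrs, **kwargs):
            for a in _attrs:
                setattr(self, a, kwargs.get(a))

        classes[class_name] = type(class_name, (object,),
                                   {"__init__": __init__, "ATTRIBUTES": attrs})
    return classes

if __name__ == "__main__":
    generated = generate_classes(MODEL)
    assay = generated["BioAssay"](identifier="BA-1", name="liver-vs-control", channels=2)
    print(assay.identifier, assay.ATTRIBUTES)
```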