906 results for Adaptive object model


Relevance:

80.00%

Publisher:

Abstract:

LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19., 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.

Relevance:

80.00%

Publisher:

Abstract:

In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city-management issues. It is based on the Scalable Vector Graphics (SVG) standard for Web graphics development. The project uses the concept of remote, real-time map creation through database access, with instructions executed by browsers on the Internet. As a way of proving the system's effectiveness, we present two case studies: the first on a region named Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, where we aimed to replace MapServer with the system proposed here. We also show results that demonstrate the greater geographical data-handling capacity achieved by the use of standardized codes and open-source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP), and PostgreSQL with its extension, PostGIS.
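
As an illustration of the database-to-browser pipeline this abstract describes, the sketch below queries geometries from PostGIS and wraps them in an SVG document. It is a minimal sketch only: the table name "reefs", the geometry column "geom", and the connection string are hypothetical, and Python's psycopg2 stands in for the PHP layer used in the original system.

    import psycopg2  # any PostgreSQL driver would serve equally well

    def geometries_as_svg(dsn: str) -> str:
        """Fetch path data rendered by PostGIS's ST_AsSVG and wrap it in SVG."""
        conn = psycopg2.connect(dsn)
        try:
            with conn.cursor() as cur:
                # ST_AsSVG converts a geometry into SVG path data on the server.
                cur.execute("SELECT ST_AsSVG(geom) FROM reefs;")
                paths = [row[0] for row in cur.fetchall()]
        finally:
            conn.close()
        body = "".join(f'<path d="{d}" fill="none" stroke="black"/>' for d in paths)
        return f'<svg xmlns="http://www.w3.org/2000/svg">{body}</svg>'

    if __name__ == "__main__":
        print(geometries_as_svg("dbname=gis user=gis"))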

Relevance:

80.00%

Publisher:

Abstract:

An emergency department provides healthcare for its area; its main objective is to treat the urgent pathology that arrives at the hospital, and the level of commitment assumed consists of diagnosing, treating, and stabilizing that urgent pathology as far as possible. Another objective is to manage citizens' demand for urgent care through an initial priority-selection system (triage) that selects, prioritizes, organizes, and manages the demand for care. To control and carry out this work as effectively as possible, management tools are used to track patients from admission to the emergency department until discharge. The applications developed are the following. Emergency Patient Management: this application assigns an initial state to the patient and allows that state to be changed using the triage (assessment) method, the most widespread in emergency medicine. In addition, diagnostic tests can be requested, and laboratory markers can be displayed to monitor the patient's evolution. Finally, a discharge report can be produced for the patient. Emergency Information Desk: this application manages the patient's physical location within the emergency department, allowing moves between locations and supporting the provision of information to relatives; relatives and their contact telephone numbers can be stored so that they can be kept informed. Development followed MVC (model-view-controller), an architectural pattern that separates an application's data, graphical user interface, and control logic into components. The software used to develop the applications is InterSystems CACHÉ, which allows the creation of a multidimensional database. The Caché object model is based on the ODMG (Object Database Management Group) standard and supports many advanced features. CACHÉ provides Zen, a complete library of pre-built object components and development tools based on InterSystems' CSP (Caché Server Pages) and object technology. Zen is especially appropriate for developing Web versions of client/server applications originally created with tools such as Visual Basic or PowerBuilder.
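
The patient-state model driven by triage is the core of the first application. The sketch below shows one way such a model could look; the state names, the five-level scale, and the class layout are illustrative assumptions, not taken from the application itself.

    from dataclasses import dataclass, field
    from enum import Enum

    class TriageLevel(Enum):
        RESUSCITATION = 1
        EMERGENCY = 2
        URGENT = 3
        STANDARD = 4
        NON_URGENT = 5

    @dataclass
    class Patient:
        name: str
        triage: TriageLevel
        location: str = "waiting_room"
        tests: list = field(default_factory=list)
        discharged: bool = False

        def request_test(self, test: str) -> None:
            self.tests.append(test)      # e.g. blood panel, X-ray

        def move_to(self, location: str) -> None:
            self.location = location     # feeds the relatives' information desk

        def discharge(self) -> None:
            self.discharged = True       # triggers the discharge report

    p = Patient("Jane Doe", TriageLevel.URGENT)
    p.request_test("troponin")
    p.move_to("observation")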

Relevance:

80.00%

Publisher:

Abstract:

This dissertation analyses the middleware technologies CORBA (Common Object Request Broker Architecture), COM/DCOM (Component Object Model/Distributed Component Object Model), J2EE (Java 2 Enterprise Edition) and Web Services (including .NET) with respect to their suitability for tightly and loosely coupled distributed applications. In addition, primarily for CORBA, the dynamic CORBA components DII (Dynamic Invocation Interface) and IFR (Interface Repository) and the generic data types Any and DynAny (dynamic Any) are examined in detail. The goals are: (a) to reach concrete conclusions about these components and to determine in which settings these generic approaches are justified; (b) to analyse the timing behaviour of the dynamic components in obtaining information about unknown objects; (c) to measure the timing behaviour of the dynamic components in their communication; (d) to measure and analyse the timing behaviour of creating generic data types and populating them with data; (e) to measure and analyse the timing behaviour of constructing unknown data types, i.e. types not described in IDL, at runtime; (f) to show the advantages and disadvantages of the dynamic components, to define their fields of application, and to compare them with other technologies such as COM/DCOM, J2EE and Web Services with respect to their capabilities; and (g) to draw conclusions about tight and loose coupling. CORBA is chosen as a standardized and complete distribution platform for investigating the problems above. With respect to its dynamic behaviour, which at the time of this work had not been investigated or had been investigated only insufficiently, CORBA and Web Services point the way for (a) working with unknown objects, which may well have implications for the development of intelligent software agents; (b) the integration of legacy applications; and (c) the possibilities in connection with B2B (business-to-business). These problems also include general questions about the marshalling/unmarshalling of data and the effort it requires, as well as general statements about the real-time capability of CORBA-based distributed applications. The results are then transferred, where admissible, to other technologies such as COM/DCOM, J2EE and Web Services. The comparisons of CORBA with DCOM, CORBA with J2EE and CORBA with Web Services show in detail the suitability of these technologies for loose and tight coupling. Furthermore, general concepts for architecture and communication optimization are derived from the results obtained. These recommendations apply without restriction to all the technologies examined in connection with distributed processing.
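
The sketch below illustrates, in plain Python rather than a real CORBA ORB, the kind of timing comparison such a study performs: a statically bound call versus an operation resolved by name on every invocation, which is roughly analogous to stub-based versus DII-style invocation. The service class is hypothetical and the analogy is loose; it shows the measurement technique, not CORBA itself.

    import timeit

    class Service:
        def ping(self) -> int:
            return 42

    svc = Service()

    # Statically bound call, analogous to a generated IDL stub.
    static_t = timeit.timeit(lambda: svc.ping(), number=1_000_000)
    # Resolve the operation by name on every call, as a DII-like client would.
    dynamic_t = timeit.timeit(lambda: getattr(svc, "ping")(), number=1_000_000)

    print(f"static:  {static_t:.3f} s")
    print(f"dynamic: {dynamic_t:.3f} s")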

Relevance:

80.00%

Publisher:

Abstract:

Software corpora facilitate reproducibility of analyses, however, static analysis for an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for single languages increasing the effort for cross-language analysis. To address these aspects we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
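
The snapshot idea is the key to avoiding parsing overhead: the meta-model is serialized once and each analysis deserializes it straight into memory. Below is a minimal sketch of that pattern; the meta-model class, its fields, and the file name are assumptions for illustration, not Pangea's actual format.

    import pickle
    from dataclasses import dataclass

    @dataclass
    class ClassEntity:          # a language-independent meta-model element
        name: str
        language: str
        method_count: int

    # Writing a snapshot (done once per corpus project)...
    snapshot = [ClassEntity("Parser", "Java", 14), ClassEntity("Lexer", "C++", 9)]
    with open("project.snapshot", "wb") as f:
        pickle.dump(snapshot, f)

    # ...and an analysis that loads it with no source-code parsing at all.
    with open("project.snapshot", "rb") as f:
        model = pickle.load(f)
    large = [c.name for c in model if c.method_count > 10]
    print(large)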

Relevance:

80.00%

Publisher:

Abstract:

The implementation of Internet technologies has led to e-manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming, and synchronizing manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to Performance Measurement (PM) processes, a critical area for decision-making and implementing improvement actions in manufacturing. This thesis proposes an Information Architecture to integrate decision support systems in e-manufacturing. Specifically, the proposed architecture offers a homogeneous PM information exchange model that can be applied through decision support in an e-manufacturing environment. Its application improves the necessary interoperability in decision-making data processing tasks. It comprises three sub-systems: a conceptual model, an object model, and a Web framework composed of a PM information platform and a PM Web services architecture. The conceptual model and the object model are based on developing all the information required to define and acquire the different indicators required by PM processes. The PM information platform uses XML and B2MML technologies to structure a new set of performance measurement exchange message schemas (PM-XML). This platform is complemented by a PM Web services architecture that uses these schemas to integrate the coding, decoding, translation, and assessment processes of the key performance indicators (KPIs). These services perform all the transactions that transform the source data into smart data usable in decision-making processes. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the architecture.
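
As a sketch of what a performance-measurement exchange message in the spirit of PM-XML might look like, the fragment below builds and serializes a small KPI document. The element names and the KPI chosen are illustrative assumptions, not the actual PM-XML vocabulary.

    import xml.etree.ElementTree as ET

    msg = ET.Element("PMMessage")
    kpi = ET.SubElement(msg, "KPI", id="MTBF")        # mean time between failures
    ET.SubElement(kpi, "Value").text = "412.5"
    ET.SubElement(kpi, "Unit").text = "hours"
    ET.SubElement(kpi, "Equipment").text = "pump-07"

    print(ET.tostring(msg, encoding="unicode"))
    # <PMMessage><KPI id="MTBF"><Value>412.5</Value>...</KPI></PMMessage>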

Relevance:

80.00%

Publisher:

Abstract:

The opportunities offered by high-performance computing hold significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff Real-time Interactive Basin Simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration developed in previous work by the authors, which identified the probability distribution functions that best characterize the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic in that they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires the simulation of many replicas of the ensemble in real time.
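
The ensemble construction works by running a deterministic model many times with parameters drawn from the calibrated distributions. Below is a minimal sketch of that loop; the toy linear-reservoir model standing in for RIBS and the lognormal parameter choice are illustrative assumptions.

    import random

    def rainfall_runoff(params: dict, rainfall: list) -> list:
        """Stand-in for a deterministic rainfall-runoff model such as RIBS."""
        k = params["storage_coefficient"]
        flow, hydrograph = 0.0, []
        for r in rainfall:
            flow = flow * (1 - 1 / k) + r / k   # simple linear-reservoir step
            hydrograph.append(flow)
        return hydrograph

    rainfall = [0.0, 5.0, 12.0, 3.0, 0.0]
    ensemble = []
    for _ in range(100):                        # one replica per parameter draw
        params = {"storage_coefficient": random.lognormvariate(1.0, 0.3)}
        ensemble.append(rainfall_runoff(params, rainfall))

    peak_flows = sorted(max(h) for h in ensemble)
    print("90% interval of peak flow:", peak_flows[5], "-", peak_flows[94])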

Relevance:

80.00%

Publisher:

Abstract:

This work proposes a concurrent multiscale modelling technique for concrete considering two distinct scales: the mesoscale, where concrete is modelled as a heterogeneous material, and the macroscale, where concrete is treated as a homogeneous material. The heterogeneity of the mesoscopic structure of concrete is idealized by considering three distinct phases: the coarse aggregates and the mortar (matrix), both treated as homogeneous materials, and the interfacial transition zone (ITZ), treated as the weakest of the three phases. The coarse aggregate is generated from a grading curve and placed randomly in the matrix. Its mechanical behaviour is described by a linear-elastic constitutive model, owing to its higher strength compared with the other two phases. Continuum finite elements with a high aspect ratio, together with a damage constitutive model, are used to represent the nonlinear behaviour of concrete arising from crack initiation in the ITZ and subsequent propagation into the matrix, leading to the formation of macrocracks. Interface finite elements with a high aspect ratio are inserted between all regular elements of the matrix and between those of the matrix and the aggregates, representing the ITZ and becoming potential crack-propagation paths. In the limit state, when the thickness of the interface element tends to zero (h → 0) and, consequently, the aspect ratio tends to infinity, these elements exhibit the same kinematics as the continuum strong discontinuity approach, making them appropriate for representing the formation of discontinuities associated with cracks, similar to cohesive models. A tension damage model is proposed to represent the nonlinear mechanical behaviour of the interfaces, associated with crack formation or even eventual crack closure. To circumvent the problems caused by the transition mesh between the macroscale and mesoscale meshes, which in general differ markedly in refinement, a recent technique for coupling non-conforming meshes is used. This technique is based on coupling finite elements (CFEs), which are able to establish displacement continuity between independently generated meshes without increasing the total number of degrees of freedom of the problem, and which can be used to couple both non-overlapping and overlapping meshes. To make multiscale analysis possible in cases where the strain-localization region cannot be defined a priori, an adaptive multiscale technique is proposed. In this approach, the macroscale stress distribution is used as an indicator for changing the modelling of critical regions, replacing the macroscale with the mesoscale during the analysis. Consequently, the macroscopic mesh is automatically replaced by a mesoscopic mesh wherever nonlinear behaviour is about to occur. Numerical tests are carried out to show the ability of the proposed model to represent the process of crack initiation and propagation in the tensile region of the concrete. The numerical results are compared with experimental results or with those obtained by direct mesoscale simulation (DMS).
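
To make the interface behaviour concrete, the sketch below implements a generic exponential tension-damage law of the kind commonly used for such interface elements; the softening form and the parameter values are assumptions for illustration, not the thesis's exact model.

    import math

    def tension_damage(strain: float, e0: float = 1e-4, ef: float = 1e-3) -> float:
        """Scalar damage d in [0, 1): zero below the threshold strain e0,
        then exponential softening governed by ef."""
        if strain <= e0:
            return 0.0
        return 1.0 - (e0 / strain) * math.exp(-(strain - e0) / (ef - e0))

    # Stress is degraded by (1 - d): sigma = (1 - d) * E * strain.
    E = 30e9  # Pa, a typical Young's modulus for concrete
    for eps in (5e-5, 2e-4, 8e-4):
        d = tension_damage(eps)
        print(f"eps={eps:.0e}  d={d:.3f}  sigma={(1 - d) * E * eps / 1e6:.2f} MPa")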

Relevance:

80.00%

Publisher:

Abstract:

This paper addresses the problem of Grid system decomposition by developing an object model of the system. The Unified Modelling Language (UML) is used as the formalization tool. This approach is motivated by the complexity of the system being analysed and by the need for simulation model design.
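
As a sketch of how a fragment of such a UML object model might carry over into code, the classes below model a composition hierarchy of the kind a Grid decomposition produces; all class names here are hypothetical, for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class ComputeNode:
        hostname: str
        cores: int

    @dataclass
    class Cluster:
        name: str
        nodes: list = field(default_factory=list)   # UML composition: Cluster *-- ComputeNode

    @dataclass
    class Grid:
        clusters: list = field(default_factory=list)

        def total_cores(self) -> int:
            return sum(n.cores for c in self.clusters for n in c.nodes)

    grid = Grid([Cluster("alpha", [ComputeNode("a1", 16), ComputeNode("a2", 32)])])
    print(grid.total_cores())  # 48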

Relevance:

80.00%

Publisher:

Abstract:

* The research has been partially supported by INFRAWEBS - IST FP62003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.

Relevance:

80.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.

Relevance:

80.00%

Publisher:

Abstract:

The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that hinder the widespread adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on it. Previous work has addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents, with two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents that introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents, also with two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
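
The sketch below illustrates deferred XML processing in the spirit of lazy parsing: the standard-library xml.etree.ElementTree.iterparse builds elements incrementally, so work on a subtree is paid for only when that subtree is reached, and memory can be reclaimed as parsing proceeds. This illustrates the general idea, not the dissertation's Double-Lazy Parser itself.

    import io
    import xml.etree.ElementTree as ET

    doc = io.StringIO(
        "<library><book id='1'><title>XML</title></book>"
        "<book id='2'><title>DOM</title></book></library>"
    )

    # Elements are materialized one "end" event at a time, not all at once.
    for event, elem in ET.iterparse(doc, events=("end",)):
        if elem.tag == "book":
            print(elem.get("id"), elem.findtext("title"))
            elem.clear()   # discard the processed subtree to bound memory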

Relevance:

80.00%

Publisher:

Abstract:

The paper addresses issues related to the design of a graphical query mechanism that can act as an interface to any object-oriented database system (OODBS) in general, and to the object model of ODMG 2.0 in particular. A brief survey of related work is given, and an analysis methodology that allows the evaluation of such languages is proposed. Moreover, the user's view level of a new graphical query language for ODMG 2.0, namely GOQL (Graphical Object Query Language), is presented. The user's view level provides a graphical schema that does not contain any of the perplexing details of an object-oriented database schema, and it also provides a foundation for a graphical interface that can support ad hoc queries for object-oriented database applications. We illustrate the user's view level of GOQL with an example.
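
The essence of a graphical query language is that the diagram the user draws is an expression tree that gets translated into textual OQL. Below is a minimal sketch of such a translation; the tiny query representation and the generated OQL string are illustrative assumptions, not GOQL's actual internals.

    from dataclasses import dataclass

    @dataclass
    class Condition:
        attribute: str
        op: str
        value: object

    @dataclass
    class GraphQuery:
        extent: str                 # the class extent the user selected
        conditions: list

        def to_oql(self) -> str:
            where = " and ".join(
                f"x.{c.attribute} {c.op} {c.value!r}" for c in self.conditions
            )
            return f"select x from x in {self.extent} where {where}"

    q = GraphQuery("Employees", [Condition("salary", ">", 50000)])
    print(q.to_oql())
    # select x from x in Employees where x.salary > 50000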
