274 results for SMT architectures
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the great deal of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and the manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The generated verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used for automatically verifying the verification conditions generated by our specialized parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and programs that manipulate concurrent data structures.
The approach we present in this work is very general, and can be applied to verify a wide range of similar concurrent data types.
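As an illustration of the last step, the following minimal sketch (not the thesis's actual tooling) shows how a single generated verification condition can be discharged on top of an SMT solver, here Z3 through its Python bindings; the invariant and transition relation are invented for the example.

```python
# A minimal sketch: checking one verification condition with the Z3 SMT solver.
from z3 import Int, Solver, And, Not, Implies, unsat

x, x_next = Int('x'), Int('x_next')

# VC (invented example): if the invariant x >= 0 holds and some thread
# executes x := x + 1, the invariant holds again in the next state.
vc = Implies(And(x >= 0, x_next == x + 1), x_next >= 0)

s = Solver()
s.add(Not(vc))             # vc is valid iff its negation is unsatisfiable
assert s.check() == unsat  # unsat: the verification condition is discharged
print("VC discharged")
```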
Abstract:
ORF slr0798, now designated ziaA, from Synechocystis PCC 6803 encodes a polypeptide with sequence features of heavy metal transporting P-type ATPases. Increased Zn2+ tolerance and reduced 65Zn accumulation were observed in Synechococcus PCC 7942, strain R2-PIM8(smt), containing ziaA and upstream regulatory sequences, compared with control cells. Conversely, reduced Zn2+ tolerance was observed following disruption of ziaA in Synechocystis PCC 6803, and ziaA-mediated restoration of Zn2+ tolerance has subsequently been used as a selectable marker for transformation. Nucleotide sequences upstream of ziaA, fused to a promoterless lacZ gene, conferred Zn2+-dependent β-galactosidase activity when introduced into R2-PIM8(smt). The product of ORF sll0792, designated ZiaR, is a Zn2+-responsive repressor of ziaA transcription. Reporter gene constructs lacking ziaR conferred elevated, Zn2+-independent expression from the ziaA operator–promoter in R2-PIM8(smt). Gel retardation assays detected ZiaR-dependent complexes forming with the zia operator–promoter, and ZiaR–DNA binding was enhanced by treatment with a metal chelator in vitro. Two mutants of ZiaR (C71S/C73S and H116R) bound to, and repressed expression from, the ziaA operator–promoter but were unable to sense Zn2+. Metal coordination to His-imidazole and Cys-thiolate ligands at these residues of ZiaR is thus implicated in Zn2+ perception by Synechocystis PCC 6803.
Abstract:
The network-on-chip (NoC) paradigm emerged to enable a high degree of integration among the many cores of systems-on-chip (SoCs), whose communication is traditionally based on buses. NoCs are defined as a structure of switches and point-to-point channels that interconnect the intellectual property (IP) cores of a SoC, providing a communication platform among them. Wireless networks-on-chip (WiNoCs) are an evolutionary approach to the NoC concept that combines NoC routing mechanisms with wireless technologies, aiming to optimize traffic flows, reduce wiring, and operate alongside traditional NoCs, thus reducing the load on buses. The use of dynamic routing within wireless networks-on-chip allows parts of the hardware to be selectively switched off, which reduces energy consumption. However, choosing where to place a wireless link in a NoC is a complex task, since nodes are traffic bridges that cannot be switched off without potentially breaking a pre-established route. Besides providing an overview of NoC architectures and of the state of the art of the emerging WiNoC paradigm, this work also proposes an evaluation method based on the well-established ns-2 simulator, whose goal is to test hybrid NoC/WiNoC scenarios. With this approach it is possible to evaluate different WiNoC parameters related to routing, the application, and the number of nodes involved in hierarchical networks. By analyzing such simulations it is also possible to investigate which routing strategy is most suitable for a given usage scenario, which is relevant when choosing the spatial arrangement of the nodes in a NoC. The experiments carried out comprise a study of the runtime behavior of ad hoc wireless routing protocols in a hierarchical WiNoC topology, followed by an analysis of network size and traffic patterns in the WiNoC.
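To make the link-placement question concrete, here is a small illustrative sketch (independent of the ns-2 method used in this work) using the networkx Python library: it estimates how a single wireless shortcut changes the average hop count of an 8x8 mesh NoC. The topology size and link placement are invented for the example.

```python
# Illustrative only: effect of one wireless shortcut on mesh NoC hop counts.
import networkx as nx

mesh = nx.grid_2d_graph(8, 8)          # wired 2D-mesh NoC topology
print("mesh only:        ", nx.average_shortest_path_length(mesh))

winoc = mesh.copy()
winoc.add_edge((0, 0), (7, 7))         # hypothetical single-hop wireless link
print("with wireless link:", nx.average_shortest_path_length(winoc))
```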
Abstract:
Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations it provides when trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses was trained and subsequently used for the English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system was assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
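As a concrete example of the kind of quantitative evaluation described (the dissertation's three actual measures are not specified here), the sketch below computes sentence-level BLEU with NLTK on an invented reference/hypothesis pair.

```python
# Illustrative sketch of one common automatic MT evaluation measure (BLEU).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "patient", "was", "discharged", "after", "treatment"]]
hypothesis = ["the", "patient", "was", "released", "after", "treatment"]

# Smoothing avoids zero scores on short sentences with missing n-grams.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"sentence BLEU: {score:.3f}")
```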
Abstract:
Many patients with Parkinson's disease (PD) may suffer from cognitive impairment from the initial stages of the disease, and up to 80% of them will develop dementia. Functional alterations in the dorsolateral prefrontal cortex (DLPFC), possibly in relation with the caudate nucleus, are thought to underlie some of these cognitive deficits. Previous results from our group showed an increase in activity and connectivity in the cognitive cortico-striatal loop following transcranial magnetic stimulation (TMS) using intermittent theta-burst stimulation (iTBS) parameters over the left DLPFC. For this study, 24 PD patients with cognitive impairment were divided into two groups: an active iTBS group (N=15) and a sham group (simulated stimulation, N=9). A detailed neuropsychological battery assessing five cognitive domains (attention, executive functions, language, memory, and visuospatial abilities) was administered on days 1, 8, 17 and 37. The iTBS protocol was applied over the left DLPFC on days 2, 4 and 7. Z-scores were computed for each cognitive domain and for global cognition. The results showed a significant increase in global cognition for up to 10 days following active iTBS, particularly in attention, executive functions and visuospatial abilities. This effect on global cognition was not replicated in the sham group. These results therefore suggest that iTBS can modulate cognitive performance in PD patients with cognitive deficits.
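For reference, the per-domain z-scores mentioned above are presumably of the standard form, in which a raw test score is standardized against normative data and domain scores average the constituent test z-scores (a common convention, assumed here rather than taken from the study):

```latex
% x: raw test score; \mu, \sigma: normative mean and standard deviation.
z = \frac{x - \mu_{\mathrm{norm}}}{\sigma_{\mathrm{norm}}},
\qquad
z_{\mathrm{domain}} = \frac{1}{n}\sum_{i=1}^{n} z_i
```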
Abstract:
The interaction between fluid seepage, bottom water redox, and chemosynthetic communities was studied at cold seeps across one of the world's largest oxygen minimum zones (OMZ), located at the Makran convergent continental margin. Push cores were obtained from seeps within and below the core-OMZ with a remotely operated vehicle. Extracted sediment pore water was analyzed for sulfide and sulfate concentrations. Depending on oxygen availability in the bottom water, seeps were colonized either by microbial mats alone or by mats and macrofauna. The latter, including ampharetid polychaetes and vesicomyid clams, occurred in distinct benthic habitats arranged in a concentric fashion around gas orifices. At most sites colonized by microbial mats, hydrogen sulfide was exported into the bottom water. Where macrofauna was widely abundant, hydrogen sulfide was retained within the sediment. Numerical modeling of pore-water profiles was performed in order to assess rates of fluid advection and bioirrigation. While the magnitude of upward fluid flow decreased from 11 cm yr⁻¹ to <1 cm yr⁻¹ and the sulfate/methane transition (SMT) deepened with increasing distance from the central gas orifice, the fluxes of sulfate into the SMT did not differ significantly (6.6-9.3 mol m⁻² yr⁻¹). Depth-integrated rates of bioirrigation increased from 120 cm yr⁻¹ in the central habitat, characterized by microbial mats and sparse macrofauna, to 297 cm yr⁻¹ in the habitat of large and few small vesicomyid clams. These results reveal that chemosynthetic macrofauna inhabiting the outer seep habitats below the core-OMZ bioirrigate efficiently and thus transport sulfate down into the upper 10 to 15 cm of the sediment. In this way the animals cope with the lower upward flux of methane in outer habitats by stimulating anaerobic oxidation of methane (AOM) with sulfate at rates high enough to provide hydrogen sulfide for chemosynthesis. Through bioirrigation, macrofauna engineer their geochemical environment and fuel the upward sulfide flux via AOM. Furthermore, due to the introduction of oxygenated bottom water into the sediment via bioirrigation, the depth of the sulfide sink gradually deepens towards the outer habitats. We therefore suggest that - in addition to the oxygen levels in the water column, which determine whether macrofaunal communities can develop or not - it is the depth of the SMT, and thus of sulfide production, that determines which chemosynthetic communities are able to exploit the sulfide at depth. We hypothesize that large vesicomyid clams, by efficiently expanding the sulfate zone down into the sediment, could cut off smaller or less mobile organisms, e.g. small clams and sulfur bacteria, from the sulfide source.
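The pore-water modeling referred to above typically rests on a steady-state diagenetic transport-reaction balance of the following generic form (a common formulation, assumed here; the study's exact model and boundary conditions may differ), with sediment diffusivity D_s, upward advection velocity v, nonlocal bioirrigation exchange coefficient α(z) against the bottom-water concentration C_bw, and net reaction rate R(z):

```latex
% Generic steady-state diagenetic equation for a pore-water solute C(z):
% diffusion, advection, nonlocal bioirrigation exchange, and net reaction.
D_s \frac{\partial^2 C}{\partial z^2}
- v \frac{\partial C}{\partial z}
+ \alpha(z)\,\bigl(C_{\mathrm{bw}} - C\bigr)
+ R(z) = 0
```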
Abstract:
The paper develops a set of ideas and techniques supporting analogical reasoning throughout the life-cycle of terrorist acts. Implementation of these ideas and techniques can enhance the intellectual level of computer-based systems for a wide range of personnel dealing with various aspects of the problem of terrorism and its effects. The method combines techniques of structure-sensitive distributed representations in the framework of Associative-Projective Neural Networks with knowledge obtained through progress in analogical reasoning, in particular the Structure Mapping Theory. These analogical reasoning tools are expected to help minimize the effects of terrorist acts on the civilian population by facilitating knowledge acquisition and the formation of terrorism-related knowledge bases, as well as by supporting the processes of analysis, decision making, and reasoning with those knowledge bases for users at various levels of expertise before, during, and after terrorist acts.
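As a toy illustration only (the actual APNN encoding, e.g. context-dependent thinning, is considerably more elaborate), the sketch below conveys the flavor of sparse binary distributed representations: episodes built from shared component codevectors overlap far more than unrelated vectors, which is what makes similarity-based analogical retrieval possible.

```python
# Toy sketch: overlap of sparse binary distributed representations.
import numpy as np

rng = np.random.default_rng(0)
dim, density = 10_000, 0.01            # high-dimensional, sparse binary codes

def rand_vec():
    return (rng.random(dim) < density).astype(np.uint8)

# Invented component codevectors for two episodes.
attack, target, bomb, embassy, virus, network = (rand_vec() for _ in range(6))
episode1 = attack | target | bomb | embassy     # superposition of components
episode2 = attack | target | virus | network

# Structurally related episodes share codevectors, hence larger overlap.
print("overlap(ep1, ep2):", int(np.count_nonzero(episode1 & episode2)))
print("overlap(ep1, rnd):", int(np.count_nonzero(episode1 & rand_vec())))
```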
Abstract:
This work deals with the research and development of a pulse-width programmable gain integrating amplifier. Two pulse-width programmable gain amplifier architectures are proposed, one based on discrete components and another based on switched capacitors. From the operating requirements defined for the study, parameters are defined and simulations are carried out to validate the architecture. Subsequently, the circuit and the software are developed and tested. The circuits implementing the two proposed architectures are then evaluated, and based on this evaluation one architecture is selected for improvement, aiming at the development of an integrated circuit in future work.
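One way to see why pulse width programs the gain (assuming an ideal inverting RC integrator with a constant input gated for a pulse of width T_on, a simplification of the proposed circuits):

```latex
V_{\mathrm{out}}
  = -\frac{1}{RC}\int_{0}^{T_{\mathrm{on}}} V_{\mathrm{in}}\,dt
  = -\frac{T_{\mathrm{on}}}{RC}\,V_{\mathrm{in}}
\quad\Longrightarrow\quad
\left|\frac{V_{\mathrm{out}}}{V_{\mathrm{in}}}\right|
  = \frac{T_{\mathrm{on}}}{RC}
```

so the effective gain scales linearly with the pulse width; in the switched-capacitor variant, the RC product would presumably be set by the switched-capacitor equivalent resistance instead.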
Abstract:
Event-B is a formal method for the modeling and verification of discrete transition systems. Event-B development yields proof obligations that must be verified (i.e. proved valid) in order to keep the produced models consistent. Satisfiability Modulo Theories (SMT) solvers are automated theorem provers used to verify the satisfiability of logic formulas with respect to a background theory (or combination of theories). SMT solvers not only handle large first-order formulas, but can also generate models and proofs, as well as identify unsatisfiable subsets of hypotheses (unsat cores). Tool support for Event-B is provided by the Rodin platform: an extensible Eclipse-based IDE that combines modeling and proving features. An SMT plug-in for Rodin has been developed with the aim of integrating alternative, efficient verification techniques into the platform. We implemented a series of complements to the SMT solver plug-in for Rodin, namely improvements to the user interface for the cases where proof obligations are reported as invalid by the plug-in. Additionally, we modified some of the plug-in features, such as support for proof generation and unsat-core extraction, to comply with the SMT-LIB standard for SMT solvers. We undertook tests using applicable proof obligations to demonstrate the new features. The contributions described can potentially affect productivity in a positive manner.
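A hedged sketch of the unsat-core extraction the plug-in relies on, shown directly against an SMT solver (Z3 via its Python API) rather than through Rodin; the toy proof obligation and hypothesis names are invented.

```python
# Sketch: tracked hypotheses let the solver report which of them suffice
# for unsatisfiability, i.e. which hypotheses actually prove the goal.
from z3 import Int, Solver, unsat

n = Int('n')
s = Solver()
s.set(unsat_core=True)

s.assert_and_track(n >= 0, 'hyp_nonneg')
s.assert_and_track(n < 100, 'hyp_bound')
s.assert_and_track(n + 1 <= 0, 'neg_goal')   # negation of the goal n + 1 > 0

if s.check() == unsat:
    # hyp_bound is irrelevant, so it does not appear in the core.
    print('unsat core:', s.unsat_core())     # e.g. [hyp_nonneg, neg_goal]
```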
Abstract:
Ubiquitous computing is a paradigm in which devices with processing and communication capabilities are embedded in the everyday elements of our lives (homes, cars, cameras, phones, schools, museums, etc.), providing services with a high degree of mobility and transparency. Developing ubiquitous systems is a complex task, since it involves several areas of computing, such as Software Engineering, Artificial Intelligence and Distributed Systems. This task becomes even more complex due to the absence of a reference architecture to guide the development of such systems. Reference architectures have been used to provide a common basis and guidelines for building software architectures for different classes of systems. Architecture description languages (ADLs), in turn, provide a syntax for the structural representation of architectural elements, their constraints and interactions, making it possible to express the architectural model of systems. Currently there are no ADLs in the literature based on reference architectures for the ubiquitous computing domain. To enable the architectural modeling of ubiquitous applications, the main goal of this work is to specify UbiACME, an architecture description language for ubiquitous applications, and to provide the UbiACME Studio tool, which will allow software architects to carry out modeling using UbiACME. To this end, we first performed a systematic review in order to investigate, in the literature related to ubiquitous systems, the elements common to these systems that should be considered in the design of UbiACME. Moreover, based on the systematic review, we defined a reference architecture for ubiquitous systems, RA-Ubi, which is the basis for the definition of the elements necessary for architectural modeling and, therefore, provides input for the definition of the UbiACME elements. Finally, to validate the language and the tool, we present a controlled experiment in which architects model a ubiquitous application using UbiACME Studio and compare it with the modeling of the same application in SysML.
Abstract:
The continuous evolution of integrated circuit technology has allowed integrating thousands of transistors on a single chip. This is due to the miniaturization process, which reduces the diameter of wires and transistors. One drawback of this process is that the circuit becomes more fragile and prone to breaking, making it more susceptible to permanent faults both during the manufacturing process and during its lifetime. Coarse-Grained Reconfigurable Architectures (CGRAs) have been used as an alternative to traditional architectures in an attempt to tolerate such faults, thanks to their intrinsic hardware redundancy and high performance. This work proposes a fault tolerance mechanism for a CGRA in order to increase the architecture's fault tolerance even under a high fault rate. The proposed mechanism was added to the scheduler, the component responsible for mapping instructions onto the architecture. The instruction mapping occurs at runtime, translating binary code without the need for recompilation. Furthermore, to allow faster execution, instruction mapping is performed using a greedy modulo scheduling algorithm, a software pipelining technique for loop acceleration. The results show that, even with the proposed mechanism, the time for mapping instructions remains in the order of microseconds, which allows the instruction mapping process to stay at runtime. In addition, a study of the scheduler's mapping rate was also carried out. The results demonstrate that, even at fault rates above 50% in functional units and interconnection components, the scheduler was able to map instructions onto the architecture in most of the tested applications.
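The following simplified sketch conveys the idea of fault-aware greedy mapping (assumptions: homogeneous functional units, unit latency, and no data-dependence or routing constraints, all of which the real scheduler must handle): each instruction is placed greedily on the first fault-free functional unit whose reservation slot, taken modulo the initiation interval II, is still free.

```python
# Simplified sketch of greedy modulo scheduling that routes around faulty FUs.
def greedy_modulo_schedule(instructions, n_fus, faulty, ii):
    """Map each instruction to a (cycle, FU) pair; FUs repeat every ii cycles."""
    reservation = {}                          # (cycle % ii, fu) -> instruction
    schedule = {}
    for t, instr in enumerate(instructions):
        cycle = t
        while True:
            fu = next((f for f in range(n_fus)
                       if f not in faulty
                       and (cycle % ii, f) not in reservation), None)
            if fu is not None:
                reservation[(cycle % ii, fu)] = instr
                schedule[instr] = (cycle, fu)
                break
            cycle += 1
            if cycle - t > ii:                # every modulo offset was tried
                raise RuntimeError("no fault-free slot left; increase ii")
    return schedule

# FU 1 is permanently faulty; the scheduler maps around it.
print(greedy_modulo_schedule(["ld", "add", "mul", "st"],
                             n_fus=3, faulty={1}, ii=2))
```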
Abstract:
High dependability, availability and fault tolerance are open problems in Service-Oriented Architecture (SOA). The possibility of generating software applications by integrating services from heterogeneous domains in a reliable way makes it worthwhile to face the challenges inherent to this paradigm. In order to ensure quality in service compositions, some research efforts propose the adoption of verification techniques to identify and correct errors. In this context, exception handling is a powerful mechanism to increase SOA quality. Several research works are concerned with mechanisms for exception propagation in web services, implemented in many languages and frameworks. However, to the best of our knowledge, no existing work evaluates these mechanisms in SOA with regard to the .NET framework. The main contribution of this paper is to evaluate and propose exception propagation mechanisms in SOA for applications developed within the .NET framework. In this direction, this work: (i) extends a previous study, showing the need to propose a solution for exception propagation in SOA for applications developed in .NET, and (ii) presents a solution, based on a model obtained from the results found in (i), to be applied to real cases through fault injection and AOP techniques.
Abstract:
The main objective of this thesis is to implement a supporting architecture for autonomic hardware systems, capable of managing the hardware running in reconfigurable devices. The proposed architecture implements manipulation, generation and communication functionalities, using the Context-Oriented Active Repository approach. The solution consists of a hardware/software architecture called the Autonomic Hardware Manager (AHM), which contains an Active Repository of Hardware Components. Using the repository, the architecture is able to manage the connected systems at run time, allowing the implementation of autonomic features such as self-management, self-optimization, self-description and self-configuration. The proposed architecture also contains a meta-model that allows the representation of the Operating Context for hardware systems. This meta-model is used as the basis for the context sensing modules required in the Active Repository architecture. In order to demonstrate the functionality of the proposed architecture, experiments were designed and implemented to support the thesis hypothesis and its objectives. Three experiments were planned and implemented: the Hardware Reconfigurable Filter, an application that implements digital filters using reconfigurable hardware; the Autonomic Image Segmentation Filter, covering the design and implementation of an autonomic image processing application; and finally the Autonomic Autopilot application, an autopilot for unmanned aerial vehicles. In this work, the application architectures were organized into modules according to their functionalities. Some modules were implemented in HDL and synthesized in hardware; other modules were kept in software. The applications were then integrated into the AHM to allow their adaptation to different Operating Contexts, making them autonomic.
Abstract:
An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes), interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product sending in pipeline networks. However, the costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures, named MOTA/D, NSTA and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyze the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
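For concreteness, the Pareto-dominance relation that underlies both the algorithms' selection pressure and the Pareto-compliant quality indicators looks as follows for the three minimization objectives (a generic sketch with invented objective vectors, not code from the study):

```python
# Generic Pareto dominance for minimization objectives.
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Objective vectors: (delivery time, interface losses, electricity cost).
sols = [(10.0, 3.2, 50.0), (12.0, 2.1, 48.0), (11.0, 3.5, 55.0)]
print(pareto_front(sols))   # third vector is dominated by the first
```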