52 results for Modular Addition
Abstract:
Helium retention in irradiated tungsten leads to swelling, pore formation, sample exfoliation and embrittlement, with deleterious consequences in many applications. In particular, tungsten has been proposed for use in future nuclear fusion plants due to its good refractory properties. However, serious concerns about tungsten survivability stem from the fact that it must withstand severe irradiation conditions. In magnetic fusion and in inertial fusion (particularly with direct-drive targets), tungsten components will be exposed to low- and high-energy helium ion irradiation, respectively. A common feature is that the most detrimental situations will take place in pulsed mode, i.e., under high-flux irradiation. There is increasing evidence of a correlation between a high helium flux and an enhancement of detrimental effects on tungsten. Nevertheless, the nature of these effects is not well understood due to the subtleties imposed by the exact temperature profile evolution, ion energy, pulse duration, existence of impurities and simultaneous irradiation with other species. Object Kinetic Monte Carlo is the technique of choice to simulate the evolution of radiation-induced damage inside solids on large temporal and spatial scales. We have used the recently developed code MMonCa (Modular Monte Carlo simulator), presented at COSIRES 2012 for the first time, to study He retention (and, in general, defect evolution) in tungsten samples irradiated with high-intensity helium pulses. The code simulates the interactions among a large variety of defects during the irradiation stage and the subsequent annealing steps. The results show that the pulsed mode leads to significantly higher He retention at temperatures above 700 K. In this paper we discuss the process of He retention in terms of trap evolution. In addition, we discuss the implications of these findings for inertial fusion.
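The event-selection loop at the heart of an Object Kinetic Monte Carlo code can be sketched as follows. This is a minimal, illustrative residence-time (BKL-style) step with made-up event names and rates; it is not MMonCa's actual interface:

```python
import math
import random

def okmc_step(events, rng=random):
    """Pick one event with probability proportional to its rate and
    return it together with an exponentially distributed time increment
    (the standard residence-time algorithm)."""
    total = sum(rate for _, rate in events)
    r = rng.random() * total
    acc = 0.0
    chosen = events[-1][0]  # fallback for floating-point round-off
    for name, rate in events:
        acc += rate
        if r < acc:
            chosen = name
            break
    # 1 - random() lies in (0, 1], so the logarithm is always defined.
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Hypothetical event catalogue for He in W: migration vs. trapping rates.
events = [("He_migration", 1e6), ("He_trapping", 1e4)]
name, dt = okmc_step(events)
```

With these illustrative rates, migration is selected roughly 99% of the time, and simulated time advances in steps of about 1/(total rate).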
Abstract:
Un porcentaje importante de las pérdidas de la producción agrícola se deben a las enfermedades que causan en los cultivos los hongos necrótrofos y vasculares. Para mejorar la productividad agrícola es necesario tener un conocimiento detallado de las bases genéticas y moleculares que regulan la resistencia de las plantas a este tipo de patógenos. En Arabidopsis thaliana la resistencia frente a patógenos necrótrofos, como el hongo Plectosphaerella cucumerina BMM (PcBMM), es genéticamente compleja y depende de la activación coordinada de distintas rutas de señalización, como las reguladas por las hormonas ácido salicílico (SA), ácido jasmónico (JA), etileno (ET) y ácido abscísico (ABA), así como de la síntesis de compuestos antimicrobianos derivados del triptófano y de la integridad de la pared celular (Llorente et al., 2005; Hernández-Blanco et al., 2007; Delgado-Cerezo et al., 2012). Uno de los componentes clave en la regulación de la resistencia de las plantas a patógenos (incluidos hongos necrótrofos y biótrofos) es la proteína G heterotrimérica, un complejo proteico formado por tres subunidades (Gα, Gβ y Gγ), que también regula distintos procesos del desarrollo vegetal. En Arabidopsis hay un gen que codifica para la subunidad α (GPA1), otro para la β (AGB1), y tres genes para la subunidad γ (AGG1, AGG2 y AGG3). El complejo GPA1-AGB1-AGG(1-3) se activa y disocia tras la percepción de una señal específica, actuando el dímero AGB1-AGG1/2 como un monómero funcional que regula las respuestas de defensa (Delgado-Cerezo et al., 2012). Estudios transcriptómicos y análisis bioquímicos de la pared celular en los que se comparaban los mutantes agb1-2 y agg1 agg2, y plantas silvestres (Col-0) revelaron que la resistencia mediada por Gβ-Gγ1/2 no es dependiente de rutas de defensa previamente caracterizadas, y sugieren que la proteína G podría modular la composición/estructura (integridad) de la pared celular (Delgado-Cerezo et al., 2012).
Recientemente, se ha demostrado que AGB1 es un componente fundamental de la respuesta inmune mediada por Pathogen-Associated Molecular Patterns (PTI), ya que los mutantes agb1-2 son incapaces de activar tras el tratamiento con PAMPs respuestas de inmunidad, como la producción de especies reactivas de oxígeno (ROS; Liu et al., 2013). Dada la importancia de la proteína G heterotrimérica en la regulación de las respuestas de defensa (incluida la PTI), realizamos un escrutinio de mutantes supresores de la susceptibilidad de agb1-2 al hongo necrótrofo PcBMM, para identificar componentes adicionales de las rutas de señalización reguladas por AGB1. En este escrutinio se aislaron cuatro mutantes sgb (suppressors of agb1-2 susceptibility to pathogens), dos de los cuales, sgb10 y sgb11, se han caracterizado en la presente Tesis Doctoral. El mutante sgb10 es un segundo alelo nulo del gen MKP1 (At3g55270), que codifica la MAP quinasa-fosfatasa 1 (Bartels et al., 2009). Este mutante presenta lesiones espontáneas en plantas adultas y una activación constitutiva de las principales rutas de defensa (SA, JA y ET, y de metabolitos secundarios, como la camalexina), que explicaría su elevada resistencia a PcBMM y Pseudomonas syringae. Estudios epistáticos sugieren que la resistencia mediada por SGB10 no es dependiente, sino complementaria, de la regulada por AGB1. El mutante sgb10 es capaz de restablecer en agb1-2 la producción de ROS y otras respuestas PTI (fosforilación de las MAPK6/3/4/11) tras el tratamiento con PAMPs tan diversos como flg22, elf18 y quitina, lo que demuestra el papel relevante de SGB10/MKP1 y de AGB1 en PTI. El mutante sgb11 se caracteriza por presentar un fenotipo similar a los mutantes irregular xylem (e.g. irx1), afectados en la pared celular secundaria: irregularidades en las células xilemáticas, reducción en el tamaño de la roseta y altura de planta, y hojas con un mayor contenido de clorofila.
La resistencia de sgb11 a PcBMM es independiente de agb1-2, ya que la susceptibilidad del doble mutante sgb11 agb1-2 es intermedia entre la de agb1-2 y sgb11. El mutante sgb11 no revierte la deficiente PTI de agb1-2 tras el tratamiento con flg22, lo que indica que está alterado en una ruta distinta de la regulada por SGB10. sgb11 presenta una sobreactivación de la ruta del ácido abscísico (ABA), lo que podría explicar su resistencia a PcBMM. La mutación sgb11 ha sido cartografiada en el cromosoma III de Arabidopsis entre los marcadores AthFUS6 (81,64 cM) y nga6 (86,41 cM), en un intervalo de aproximadamente 200 kb que comprende genes entre los que no se encuentra ninguno previamente descrito como IRX. El aislamiento y caracterización de SGB11 apoya la relevancia de la proteína G heterotrimérica en la regulación de la interconexión entre integridad de la pared celular e inmunidad.

ABSTRACT

A significant percentage of agricultural losses are due to diseases caused by necrotrophic and vascular fungi. To enhance crop yields it is necessary to have a detailed knowledge of the genetic and molecular bases regulating plant resistance to these pathogens. Arabidopsis thaliana resistance to necrotrophic pathogens, such as the fungus Plectosphaerella cucumerina BMM (PcBMM), is genetically complex and depends on the coordinated activation of various signaling pathways. These include those regulated by the hormones salicylic acid (SA), jasmonic acid (JA), ethylene (ET) and abscisic acid (ABA), the synthesis of tryptophan-derived antimicrobial compounds, and cell wall integrity (Llorente et al., 2005; Hernández-Blanco et al., 2007; Delgado-Cerezo et al., 2012). One key component in the regulation of plant resistance to pathogens (including biotrophic and necrotrophic fungi) is the heterotrimeric G protein, a complex formed by three subunits (Gα, Gβ and Gγ) that also regulates various plant developmental processes.
In Arabidopsis, a single gene encodes the α subunit (GPA1), another the β subunit (AGB1), and three genes encode the γ subunit (AGG1, AGG2 and AGG3). The GPA1-AGB1-AGG(1-3) complex is activated and dissociates after perception of a specific signal; the AGB1-AGG1/2 dimer then acts as a functional monomer regulating defense responses (Delgado-Cerezo et al., 2012). Comparative transcriptomic studies and biochemical analyses of the cell wall of agb1-2 and agg1 agg2 mutants and wild-type plants (Col-0) showed that Gβ-Gγ1/2-mediated resistance is not dependent on previously characterized defense pathways, and suggested that the G protein may modulate the composition/structure (integrity) of the plant cell wall (Delgado-Cerezo et al., 2012). Recently, it has been shown that AGB1 is a critical component of the immune response mediated by Pathogen-Associated Molecular Patterns (PTI), as agb1-2 mutants are unable to activate immune responses such as reactive oxygen species (ROS) production after PAMP treatment (Liu et al., 2013). Considering the importance of the heterotrimeric G protein in the regulation of defense responses (including PTI), we performed a screening for suppressors of agb1-2 susceptibility to the necrotrophic fungus PcBMM. This would allow the identification of additional components of the signaling pathways regulated by AGB1. In this search four sgb mutants (suppressors of agb1-2 susceptibility to pathogens) were isolated, two of which, sgb10 and sgb11, have been characterized in this PhD thesis. The sgb10 mutant is a second null allele of the MKP1 gene (At3g55270), which encodes the MAP kinase phosphatase 1 (Bartels et al., 2009). This mutant exhibits spontaneous lesions in adult plants and a constitutive activation of the main defense pathways (SA, JA and ET, and secondary metabolites such as camalexin), which explains its high resistance to Pseudomonas syringae and PcBMM. Epistatic studies suggest that SGB10-mediated resistance is not dependent on, but complementary to, that regulated by AGB1.
The sgb10 mutant is able to restore ROS production and other PTI responses (MAPK6/3/4/11 phosphorylation) in agb1-2 upon treatment with PAMPs as diverse as flg22, elf18 and chitin, demonstrating the relevant role of SGB10/MKP1 and AGB1 in PTI. The sgb11 mutant shows a phenotype similar to that of irregular xylem mutants (e.g. irx1), which are affected in the secondary cell wall: irregular xylem cells, reduced rosette size and plant height, and higher chlorophyll content in leaves. The resistance of sgb11 to PcBMM is independent of agb1-2, as the susceptibility of the double mutant agb1-2 sgb11 is intermediate between that of agb1-2 and sgb11. The sgb11 mutant does not revert the deficient PTI response of agb1-2 after flg22 treatment, indicating that it is altered in a pathway different from the one regulated by SGB10. sgb11 presents an over-activation of the abscisic acid (ABA) pathway, which could explain its resistance to PcBMM. The sgb11 mutation has been mapped on chromosome III of Arabidopsis, between the AthFUS6 (81.64 cM) and nga6 (86.41 cM) markers, in a ~200 kb interval which does not include previously known IRX genes. The isolation and characterization of SGB11 supports the importance of the heterotrimeric G protein in the regulation of the interconnection between cell wall integrity and immunity.
Abstract:
This paper presents a GA-based optimization procedure for bioinspired heterogeneous modular multiconfigurable chained microrobots. When constructing heterogeneous chained modular robots that are composed of several different drive modules, one must select the type and position of the modules that form the chain. One must also develop new locomotion gaits that combine the different drive modules. These are two new features of heterogeneous modular robots that they do not share with homogeneous modular robots. This paper presents an offline control system that allows the development of new configuration schemes and locomotion gaits for these heterogeneous modular multiconfigurable chained microrobots. The offline control system is based on a simulator that is specifically designed for chained modular robots and allows them to develop and learn new locomotion patterns.
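A gait-learning loop of the kind the paper describes can be sketched with a toy genetic algorithm. Everything below (the genome of per-module phase offsets, the travelling-wave fitness, the GA parameters) is an illustrative stand-in, not the paper's actual encoding or objective:

```python
import random

def evolve_gait(fitness, n_modules=4, pop_size=20, generations=50, rng=None):
    """Tiny elitist GA: each genome is one phase offset per drive module;
    truncation selection, one-point crossover, one Gaussian mutation."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(0, 1) for _ in range(n_modules)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_modules)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_modules)             # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children                       # elitism: best survive
    return max(pop, key=fitness)

# Toy fitness: a gait is "good" when consecutive modules are a quarter
# cycle out of phase (a travelling wave), purely for illustration.
def wave_fitness(genome):
    return -sum(abs((b - a) - 0.25) for a, b in zip(genome, genome[1:]))

best = evolve_gait(wave_fitness)
```

In a heterogeneous chain, the genome would additionally carry the module type at each position, so that selection explores configurations as well as gaits.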
Abstract:
Next to the “white and cubic cabin” we have built a small white box.
Abstract:
Extension project for the Escuela Técnica Superior de Arquitectura, Madrid
Abstract:
El poder disponer de la instrumentación y los equipos electrónicos resulta vital en el diseño de circuitos analógicos. Permiten realizar las pruebas necesarias y el estudio para el buen funcionamiento de estos circuitos. Los equipos se pueden diferenciar en instrumentos de excitación, los que proporcionan las señales al circuito, y en instrumentos de medida, los que miden las señales generadas por el circuito. Estos equipos sirven de gran ayuda, pero a su vez tienen un precio elevado, lo que impide en muchos casos disponer de ellos. Por esta principal desventaja, se hace necesario conseguir un dispositivo de bajo coste que sustituya de alguna manera a los equipos reales. Si el instrumento es de medida, este sistema de bajo coste puede ser implementado mediante un equipo hardware encargado de adquirir los datos y una aplicación ejecutándose en un ordenador donde analizarlos y presentarlos en la pantalla. En el caso de que el instrumento sea de excitación, el único cometido del sistema hardware es el de proporcionar las señales cuya configuración ha enviado el ordenador. En un equipo real, es el propio equipo el que debe realizar todas esas acciones: adquisición, procesamiento y presentación de los datos. Además, la dificultad de realizar modificaciones o ampliaciones de las funcionalidades en un instrumento tradicional con respecto a una aplicación ejecutándose en un ordenador queda patente: un instrumento tradicional es un sistema cerrado, mientras que en un sistema cuya configuración o procesamiento de datos los realiza una aplicación, muchas modificaciones serían realizables modificando simplemente el software del programa de control, por lo que el coste de las modificaciones sería menor. En este proyecto se pretende implementar un sistema hardware que tenga las características y realice las funciones del equipamiento real que se pueda encontrar en un laboratorio de electrónica.
También se pretende desarrollar una aplicación encargada del control y el análisis de las señales adquiridas, cuya interfaz gráfica se asemeje a la de los equipos reales para facilitar su uso.

ABSTRACT

The instrumentation and electronic equipment are vital for the design of analogue circuits. They make it possible to perform the necessary testing and study for the proper functioning of these circuits. The devices can be classified into the following categories: excitation instruments, which provide the signals to the circuit, and measuring instruments, those in charge of measuring the signals produced by the circuit. This equipment is considerably helpful; however, its high price often makes it hardly accessible. For this reason, low-price equipment is needed in order to replace real devices. If the instrument is a measuring one, this low-cost system can be implemented with hardware equipment in charge of acquiring the data and an application running on a computer that analyzes the data and presents it on the screen. In the case of an excitation instrument, the only task of the hardware system is to provide the signals whose configuration has been sent by the computer. In a real instrument, it is the instrument itself that must perform all these actions: acquisition, processing and presentation of data. Moreover, the difficulty of making changes or additions to the features of traditional devices, with respect to an application running on a computer, is evident. This is due to the fact that a traditional instrument is a closed system, whereas when its configuration or data processing is carried out by an application, certain changes can be made just by modifying the control program software. Consequently, the cost of these modifications is lower. This project aims to implement a hardware system with the same features and functions as any real device available in an electronics laboratory. Besides, it aims to develop an application for the monitoring and analysis of acquired signals.
This application is provided with a graphical interface resembling those of real devices in order to facilitate its use.
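The split described above (the computer sends a small configuration record; the excitation hardware only has to synthesize the samples) can be sketched as follows. The configuration field names and sample format are illustrative assumptions, not the project's actual protocol:

```python
import math

def generate_waveform(config, n_samples, sample_rate):
    """Excitation-instrument side: synthesize a sine wave from the
    configuration record sent by the PC application (hypothetical
    fields: amplitude in volts, frequency in Hz, DC offset in volts)."""
    amp = config["amplitude"]
    freq = config["frequency"]
    offset = config["offset"]
    return [offset + amp * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]

# The PC application would send this record; the hardware only loops
# over generate_waveform and drives its DAC with the result.
samples = generate_waveform({"amplitude": 1.0, "frequency": 50.0, "offset": 0.5},
                            n_samples=1000, sample_rate=10000)
```

A measuring instrument would run the mirror image of this loop: the hardware streams raw ADC samples and the PC application does all analysis and display.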
Abstract:
3D modular construction is poorly known and scarcely published in the technical literature. In spite of that, an increasing number of manufacturers offer their products in different countries. This method has largely evolved from early examples such as the American Gold Rush prefabrication in the nineteenth century, the Sears precut homes or Voisin's prototypes for modular homes, up to the end of the first half of the twentieth century. In this period a non-negligible number of attempts at 3D modular construction were carried out, ranging from theoretical proposals to several hundred or thousand units produced. Selected examples of modular architecture will be analysed in order to illustrate its technical evolution concerning materials, structure, transportation and on-site assembly. Success and failure factors of the different systems will be discussed. Conclusions will be drawn about the building criteria they exhibit and their applicability in current architecture.
Abstract:
This paper presents a new simulation environment aimed at heterogeneous chained modular robots. This simulator allows testing the feasibility of the design, checking how modules are going to perform in the field and verifying hardware, electronics and communication designs before the prototype is built, saving time and resources. The paper shows how the simulator is built and how it can be set up to adapt to new designs. It also gives some examples of its use showing different heterogeneous modular robots running in different environments.
Abstract:
Online services are no longer isolated. The release of public APIs and technologies such as webhooks is allowing users and developers to access their information easily. Intelligent agents could use this information to provide a better user experience across services, connecting services with smart automatic behaviours or actions. However, agent platforms are not prepared to easily add external sources such as web services, which hinders the usage of agents in the so-called Evented or Live Web. As a solution, this paper introduces an event-based architecture for agent systems, in accordance with the new tendencies in web programming. In particular, it is focused on personal agents that interact with several web services. With this architecture, called MAIA, connecting to new web services does not involve any modification to the platform.
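The event-driven style the abstract describes can be sketched with a minimal publish/subscribe hub: web services publish events, and personal agents react to them without the platform having to change when a new service appears. Names and event types here are illustrative, not MAIA's real API:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe hub: handlers (agents) register for
    event types; sources (web services) publish into the bus."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # A new web service only needs to publish events; neither the
        # bus nor already-registered agents require any modification.
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()
log = []
# A hypothetical personal agent reacting to a hypothetical mail event.
bus.subscribe("mail.received", lambda p: log.append(("notify", p["from"])))
bus.publish("mail.received", {"from": "alice@example.org"})
```

A webhook endpoint would simply translate incoming HTTP callbacks into `publish` calls on a bus like this one.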
Abstract:
Nowadays, Wireless Ad Hoc Sensor Networks (WAHSNs), especially limited in energy and resources, are subject to development constraints and difficulties such as the increasing RF spectrum saturation at the unlicensed bands. Cognitive Wireless Sensor Networks (CWSNs), leaning on a cooperative communication model, develop new strategies to mitigate the inefficient use of the spectrum that WAHSNs face. However, because this research is still at an early stage, only a few platforms, with limited features, allow their study. This paper presents a versatile platform that brings cognitive properties into WAHSNs. It combines hardware and software modules as an entire instrument to investigate CWSNs. The hardware fits WAHSN requirements in terms of size, cost, features, and energy. It allows communication over three different RF bands, making it the only cognitive platform for WAHSNs with this capability. In addition, its modular and scalable design is widely adaptable to almost any WAHSN application. Significant features such as radio interface (RI) agility and energy consumption have been proven throughout different performance tests.
Abstract:
In the last decade, multi-sensor data fusion has become a broadly demanded discipline to achieve advanced solutions that can be applied in many real-world situations, either civil or military. In Defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats in the battlefield, and identifying and protecting own forces strategically. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. Thanks to the appropriate data fusion technique, we can give these systems the power to automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to assess daily supervision operations. This paper focuses on its application to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting the research activities being carried out in the computer vision, signal processing and machine learning fields to improve the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system proposes fusion of data at the feature level, with features extracted from a video camera and a laser scanner. In addition, a stochastic tracking approach, which introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the previous detection output, is also presented. It has been shown that this computer vision tracker contributes to detecting objects even under poor visual information. Finally, in the same way that humans are able to analyze both temporal and spatial relations among items in the scene to assign them a meaning, once the target objects have been correctly detected and tracked, it is desired that machines can provide a trustworthy description of what is happening in the scene under surveillance.
Accomplishing such an ambitious task requires a machine-learning-based hierarchic architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed has been implemented for the evaluation of the proposed modular system. This scenario is a closed circuit where real traffic situations can be simulated. First results have shown the strength of the proposed system.
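One predict/update/resample cycle of the kind of particle filter mentioned above can be sketched for a 1-D position track. The noise levels and the Gaussian likelihood are illustrative stand-ins for the paper's camera/laser fused tracker:

```python
import math
import random

def particle_filter_step(particles, measurement, rng,
                         motion_noise=0.5, meas_noise=1.0):
    """One bootstrap particle filter cycle for a 1-D position state."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + rng.gauss(0, motion_noise) for p in particles]
    # Update: weight each particle by a Gaussian likelihood of the
    # measurement; during occlusions this step would simply be skipped.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_noise ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted particle set.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(1)
particles = [rng.uniform(-10, 10) for _ in range(500)]
for z in (2.0, 2.1, 2.3):  # three noisy position measurements
    particles = particle_filter_step(particles, z, rng)
estimate = sum(particles) / len(particles)
```

After a few measurements near the same position, the particle cloud collapses around it and the mean becomes a usable track estimate.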
Abstract:
Recientemente, el paradigma de la computación en la nube ha recibido mucho interés por parte tanto de la industria como del mundo académico. Las infraestructuras cloud públicas están posibilitando nuevos modelos de negocio y ayudando a reducir costes. Sin embargo, una compañía podría desear ubicar sus datos y servicios en sus propias instalaciones, o tener que atenerse a leyes de protección de datos. Estas circunstancias hacen a las infraestructuras cloud privadas ciertamente deseables, ya sea para complementar a las públicas o para sustituirlas por completo. Por desgracia, las carencias en materia de estándares han impedido que las soluciones para la gestión de infraestructuras privadas se hayan desarrollado adecuadamente. Además, la multitud de opciones disponibles ha creado en los clientes el miedo a depender de una tecnología concreta (technology lock-in). Una de las causas de este problema es la falta de alineación entre la investigación académica y los productos comerciales, ya que aquella está centrada en el estudio de escenarios idealizados sin correspondencia con el mundo real, mientras que éstos consisten en soluciones desarrolladas sin tener en cuenta cómo van a encajar con los estándares más comunes o sin preocuparse de hacer públicos sus resultados. Con objeto de resolver este problema, propongo un sistema de gestión modular para infraestructuras cloud privadas enfocado en tratar con las aplicaciones en lugar de centrarse únicamente en los recursos hardware. Este sistema de gestión sigue el paradigma de la computación autónoma y está diseñado en torno a un modelo de información sencillo, desarrollado para ser compatible con los estándares más comunes. Este modelo divide el entorno en dos vistas, que sirven para separar aquello que debe preocupar a cada actor involucrado del resto de información, pero al mismo tiempo permitiendo relacionar el entorno físico con las máquinas virtuales que se despliegan encima de él. 
En dicho modelo, las aplicaciones cloud están divididas en tres tipos genéricos (Servicios, Trabajos de Big Data y Reservas de Instancias), para que así el sistema de gestión pueda sacar partido de las características propias de cada tipo. El modelo de información está complementado por un conjunto de acciones de gestión atómicas, reversibles e independientes, que determinan las operaciones que se pueden llevar a cabo sobre el entorno y que se usa para hacer posible la escalabilidad del entorno. También describo un motor de gestión encargado de la colocación de recursos, a partir del estado del entorno y usando el ya mencionado conjunto de acciones. Está dividido en dos niveles: la capa de Gestores de Aplicación, encargada de tratar sólo con las aplicaciones; y la capa del Gestor de Infraestructura, responsable de los recursos físicos. Dicho motor de gestión obedece un ciclo de vida con dos fases, para así modelar mejor el comportamiento de una infraestructura real. El problema de la colocación de recursos es atacado durante una de las fases (la de consolidación) por un resolutor de programación entera, y durante la otra (la online) por un heurístico hecho ex profeso. Varias pruebas han demostrado que este acercamiento combinado es superior a otras estrategias. Para terminar, el sistema de gestión está acoplado a arquitecturas de monitorización y de actuadores: la primera, encargada de recolectar información del entorno; la segunda, modular en su diseño y capaz de conectarse con varias tecnologías y ofrecer varios modos de acceso.

ABSTRACT

The cloud computing paradigm has risen in popularity within the industry and the academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings.
Unfortunately, a lack of standardization has prevented private infrastructure management solutions from being developed to a certain level, and a myriad of different options has induced the fear of lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care about how they fit with common standards, or even not disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm, and is designed around a simple information model developed to be compatible with common standards. This model splits the environment in two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified in three broad types (Services, Big Data Jobs and Instance Reservations), in order for the management system to take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions which determine the operations that can be performed over the environment and which is used to realize the cloud environment's scalability. I also describe a management engine tasked with resource placement, working from the environment's state and using the aforementioned set of actions. It is divided in two tiers: the Application Managers layer, concerned just with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure.
The placement problem is tackled during one phase (consolidation) by using an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former able to collect the necessary information from the environment, and the latter modular in design and capable of interfacing with several technologies and offering several access interfaces.
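The division of labour between the two phases can be illustrated with a toy online-phase heuristic. The data layout and the first-fit rule below are illustrative assumptions, not the thesis's actual algorithm; the consolidation phase would instead solve an integer program over all placements at once:

```python
def first_fit(hosts, vm_demand):
    """Online-phase sketch: place the newly arrived VM on the first
    host with enough free capacity, so decisions stay fast even if
    the resulting packing is not globally optimal."""
    for host in hosts:
        if host["free"] >= vm_demand:
            host["free"] -= vm_demand
            return host["name"]
    return None  # no capacity left: the environment must scale out

# Two hypothetical hosts and three VM requests of 3 capacity units each.
hosts = [{"name": "h1", "free": 4}, {"name": "h2", "free": 8}]
placements = [first_fit(hosts, demand) for demand in (3, 3, 3)]
```

Periodically, a consolidation pass would re-solve the whole placement optimally and migrate VMs, which is exactly why the management actions need to be atomic and reversible.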
Abstract:
Logic programming (LP) is a family of high-level programming languages which provides high expressive power. With LP, the programmer writes the properties of the result and/or executable specifications instead of detailed computation steps. Logic programming systems featuring tabled execution and constraint logic programming have been shown to increase the declarativeness and efficiency of Prolog, while at the same time making it possible to write very expressive programs. Tabled execution avoids infinite failure in some cases, while improving efficiency in programs which repeat computations. CLP reduces the search tree and brings the power of solving (in)equations over arbitrary domains. Similarly to the LP case, CLP systems can also benefit from the power of tabling. Previous implementations which take full advantage of the ideas behind tabling (e.g., forcing suspension, answer subsumption, etc., wherever necessary to avoid recomputation and to terminate whenever possible) did not offer a simple, well-documented, easy-to-understand interface, which would be necessary to make the integration of arbitrary CLP solvers into existing tabling systems possible. This clearly hinders a more widespread usage of the combination of both facilities. In this thesis we examine the requirements that a constraint solver must fulfill in order to be interfaced with a tabling system. We propose and implement a framework, which we have called Mod TCLP, with a minimal set of operations (e.g., entailment checking and projection) which the constraint solver has to provide to the tabling engine. We validate the design of Mod TCLP through a series of use cases: we re-engineer a previously existing tabled constraint domain (difference constraints) which was connected in an ad-hoc manner with the tabling engine in Ciao Prolog; we integrate Holzbauer's CLP(Q) implementation with Ciao Prolog's tabling engine; and we implement a constraint solver over (finite) lattices.
We evaluate its performance with several benchmarks that implement a simple abstract interpreter whose fixpoint is reached by means of tabled execution, and whose domain operations are handled by the constraint solver over (finite) lattices, where TCLP avoids recomputing subsumed abstractions.

---ABSTRACT---

La programación lógica con restricciones (CLP) y la tabulación son extensiones de la programación lógica que incrementan la declaratividad y eficiencia de Prolog, al mismo tiempo que hacen posible escribir programas más expresivos. Las implementaciones anteriores que integran completamente ambas extensiones (incluyendo la suspensión de la ejecución de objetivos siempre que sea necesario, la implementación de inclusión (subsumption) de respuestas, etc., en todos los puntos en los que sea necesario para evitar recomputaciones y garantizar la terminación cuando sea posible) no han proporcionado una interfaz simple, bien documentada y fácil de entender. Esta interfaz es necesaria para permitir integrar resolutores de CLP arbitrarios en el sistema de tabulación, y su ausencia claramente dificulta un uso más generalizado de la integración de ambas extensiones. En esta tesis examinamos los requisitos que un resolutor de restricciones debe cumplir para ser integrado con un sistema de tabulación. Proponemos un esquema (y su implementación), que hemos llamado Mod TCLP, que requiere un reducido conjunto de operaciones (en particular, y entre otras, entailment y proyección de almacenes de restricciones) que el resolutor de restricciones debe ofrecer al sistema de tabulación. Hemos validado el diseño de Mod TCLP con una serie de casos de uso: la refactorización de un sistema de restricciones (difference constraints) previamente conectado de un modo ad-hoc con la tabulación de Ciao Prolog; la integración del sistema de restricciones CLP(Q) de Holzbauer; y la implementación de un resolutor de restricciones sobre retículos finitos.
We have evaluated its performance with several benchmark programs, including the implementation of an abstract interpreter that reaches its fixpoint by means of the tabling system, and in which the domain operations are carried out by the constraint solver over (finite) lattices, where TCLP avoids recomputing abstract values of variables already contained in previous calls.
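The two solver operations the abstract names, entailment checking and answer subsumption over a lattice, can be illustrated with a toy sketch. This is not the Mod TCLP API (all names here are hypothetical); it only shows, for the simple lattice of finite sets ordered by inclusion, how a table of answers can discard a new answer that is entailed by a stored one, avoiding recomputation, and keep only the most general answers.

```python
# Toy sketch (hypothetical names, not the Mod TCLP interface): answer
# subsumption in a tabled evaluation whose answers are elements of a
# finite lattice, here sets ordered by inclusion.

def entails(a, b):
    """a entails b if a is at least as specific: here, set inclusion."""
    return a <= b

class AnswerTable:
    """Keeps, per tabled call, only answers not entailed by another."""
    def __init__(self):
        self.answers = {}  # call identifier -> list of lattice elements

    def add(self, call, ans):
        kept = self.answers.setdefault(call, [])
        # A new answer entailed by a stored one adds nothing: the caller
        # can skip re-evaluating the goal (this is what saves work).
        if any(entails(ans, old) for old in kept):
            return False
        # A more general new answer subsumes stored ones; drop them.
        kept[:] = [old for old in kept if not entails(old, ans)]
        kept.append(ans)
        return True
```

A usage example: after storing `{1, 2}` for a call, the answer `{1}` is rejected as subsumed, while `{1, 2, 3}` is accepted and replaces `{1, 2}`.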
Abstract:
Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of a given resource, such as the traditional execution steps, time, or memory, and, more recently, energy consumption or user-defined resources (e.g., number of bits sent over a socket, number of database accesses, number of calls to particular procedures, etc.). This is performed statically, i.e., without actually running the programs. Resource usage information is useful for a variety of optimization and verification applications, as well as for guiding software design. For example, programmers can use such information to choose between different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; and parallelizing compilers can use cost estimates for granularity control, which tries to balance the overheads of task creation and manipulation against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing a number of relevant challenges and overcoming many of the limitations it presented. The goal of that prototype was to show the viability of casting resource analysis as an abstract domain, and how doing so could overcome important limitations of state-of-the-art resource usage analysis tools. For this purpose, it was implemented as an abstract domain in PLAI, the abstract interpretation framework of the CiaoPP system. We have improved both the design and the implementation of the prototype, with the eventual goal of evolving the tool to the industrial application level. The abstract operations of this tool depend heavily on setting up, and finding closed-form solutions of, recurrence relations that represent the resource usage behavior of program components and of the whole program.
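A minimal illustration of the kind of recurrence such an analysis sets up (the program and names below are hypothetical, chosen only for the example): for list reversal with an accumulating parameter, counting one resource unit per call gives the recurrence C(0) = 1, C(n) = C(n-1) + 1, whose closed-form solution is C(n) = n + 1. The sketch checks the closed form against an instrumented run.

```python
# Illustrative only: a recurrence of the kind a resource analysis would
# set up, checked against an instrumented execution.

def rev(xs, acc, steps=0):
    """Reverse xs onto the accumulator acc, counting one step per call."""
    steps += 1                 # one resource unit per call
    if not xs:
        return acc, steps
    return rev(xs[1:], [xs[0]] + acc, steps)

def cost_closed_form(n):
    """Closed-form solution of C(0) = 1, C(n) = C(n-1) + 1."""
    return n + 1

# Reversing a 7-element list makes 8 calls, matching C(7) = 8.
result, steps = rev(list(range(7)), [])
assert result == list(reversed(range(7)))
assert steps == cost_closed_form(7)
```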
While there exist many tools able to find closed-form solutions for some types of recurrences, such as Computer Algebra Systems (CAS) and programming libraries, none of them alone is able to handle all the types of recurrences that arise during program analysis. In addition, some types of recurrences cannot be solved by any existing tool. This clearly constitutes a bottleneck for this kind of resource usage analysis. Thus, one of the major challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, able to combine and take advantage of the results of existing solvers. Additionally, we have developed and integrated into our novel solver a technique for finding closed-form upper bounds of a special class of recurrence relations that arise during the analysis of programs with accumulating parameters. Finally, we have integrated the improved resource analysis into the CiaoPP general framework for resource usage verification, and specialized the framework for verifying energy consumption specifications of embedded imperative programs in a real application, showing the usefulness and practicality of the resulting tool.---ABSTRACT---Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of some given resource, such as execution steps, time, or memory, and, more recently, energy consumption or user-defined resources (e.g., the number of bits sent over a socket, the number of accesses to a database, the number of calls to particular procedures, etc.). This is performed statically, i.e., without having to run the programs. Resource usage information is very useful for a wide variety of program optimization and verification applications, as well as for assisting in program design.
For example, programmers can use such information to choose between different algorithmic solutions to a problem; program transformation systems can use cost information to choose among alternative transformations; and parallelizing compilers can use cost estimates to perform granularity control, which tries to balance the cost of task creation and management against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing several relevant challenges and overcoming numerous limitations it presented. The goal of that prototype was to show the viability of defining resource analysis as an abstract domain, and how the limitations of other similar state-of-the-art tools could be overcome. To that end, it was implemented as an abstract domain in PLAI, the abstract interpretation framework of the CiaoPP system. We have improved both the design and the implementation of this prototype to enable its evolution toward a tool usable in an industrial setting. The abstract operations of this tool depend to a large extent on generating, and then finding closed-form solutions of, recurrence relations that model the resource consumption behavior of program components and of the whole program. Although many tools currently exist that are able to find closed-form solutions for certain types of recurrences, such as Computer Algebra Systems (CAS) and programming libraries, none of these tools is able to handle, on its own, all the types of recurrences that arise during resource analysis.
There are even recurrences that no existing tool can solve. This clearly constitutes a bottleneck for this kind of resource usage analysis. Therefore, one of the main challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations that combines, and takes advantage of, the results of existing solvers. In addition, we have developed and integrated into our new solver a technique for obtaining closed-form upper bounds for a characteristic class of recurrence relations that arise during the analysis of logic programs with accumulating parameters. Finally, we have integrated the new resource analysis into the general framework for resource verification of CiaoPP, and we have instantiated this framework for the verification of energy consumption specifications of embedded imperative programs, showing the viability and usefulness of the resulting tool in a real application.
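The modular-solver idea described above, trying a collection of partial solvers until one recognizes the recurrence, can be sketched as follows. This is not the CiaoPP framework (the solver names and the driver are hypothetical); it handles only first-order linear recurrences C(n) = a*C(n-1) + b with C(0) = c, dispatching to an arithmetic or a geometric sub-solver and cross-checking each closed form by unrolling the recurrence.

```python
# Hedged sketch of a modular recurrence-solver driver: each sub-solver
# recognises one recurrence shape for C(n) = a*C(n-1) + b, C(0) = c,
# and returns a closed-form function or None.

def solver_arithmetic(a, b, c):
    # a == 1: C(n) = C(n-1) + b  =>  C(n) = c + b*n
    if a == 1:
        return lambda n: c + b * n
    return None

def solver_geometric(a, b, c):
    # a != 1: C(n) = a*C(n-1) + b  =>  C(n) = a**n*c + b*(a**n - 1)/(a - 1)
    if a != 1:
        return lambda n: a**n * c + b * (a**n - 1) // (a - 1)
    return None

def solve(a, b, c, solvers=(solver_arithmetic, solver_geometric)):
    """Try each sub-solver in turn; sanity-check the result by unrolling."""
    for s in solvers:
        f = s(a, b, c)
        if f is not None:
            v = c
            for n in range(1, 6):       # compare against the recurrence
                v = a * v + b
                assert f(n) == v
            return f
    raise ValueError("no sub-solver handles this recurrence")
```

For example, `solve(1, 2, 0)` returns the closed form of C(n) = C(n-1) + 2, and `solve(2, 1, 1)` that of C(n) = 2*C(n-1) + 1, i.e. C(n) = 2**(n+1) - 1. Extending coverage means appending another sub-solver to the tuple, which is the point of the modular design.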
Abstract:
Modular Coordination