941 results for cost-informed process execution
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and they have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
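A minimal worked form of such a calibrated cost model (the notation and the set of operation types are illustrative, not taken from the paper): the platform-independent analysis supplies bounds on operation counts, and the one-time profiling supplies one time constant per operation type.

\[
T_p(\bar{n}) \;=\; \sum_{\omega \in \Omega} K_{\omega} \cdot C^{p}_{\omega}(\bar{n})
\]

where \(C^{p}_{\omega}(\bar{n})\) is the statically inferred bound on the number of operations of type \(\omega\) (resolutions, unifications, arithmetic steps, ...) performed by procedure \(p\) for input sizes \(\bar{n}\), and \(K_{\omega}\) is the time per operation measured once on the target platform; upper (lower) time bounds then follow from upper (lower) bounds on each \(C^{p}_{\omega}\).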
Abstract:
Proof-carrying code is a general methodology for certifying that the execution of an untrusted mobile code is safe according to a predefined safety policy. The basic idea is that the code supplier attaches a certificate (or proof) to the mobile code, which the consumer then checks in order to ensure that the code is indeed safe. The potential benefit is that the consumer's task is reduced from the level of proving to the level of checking, a much simpler task. Recently, the abstract interpretation techniques developed in logic programming have been proposed as a basis for proof-carrying code [1]. To this end, the certificate is generated from an abstract interpretation-based proof of safety. Intuitively, the verification condition is extracted from a set of assertions guaranteeing safety and the answer table generated during the analysis. Given this information, it is relatively simple and fast to verify that the code satisfies the proof and hence that its execution is safe. This extended abstract reports on experiments which illustrate several issues involved in abstract interpretation-based code certification. First, we describe the implementation of our system in the context of CiaoPP: the preprocessor of the Ciao multi-paradigm (constraint) logic programming system. Then, by means of some experiments, we show how the implementation of the framework supports code certification. Finally, we discuss the application of our method within the area of pervasive systems, which may lack the necessary computing resources to verify safety on their own. We illustrate the relevance of the information inferred by existing cost analyses for controlling resource usage in this context. Moreover, since the (rather complex) analysis phase is replaced by a simpler, efficient checking process on the code consumer side, we believe that our abstract interpretation-based approach to proof-carrying code becomes practically applicable to this kind of system.
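A schematic sketch of the consumer-side check in abstract-interpretation-based certification, under assumed interfaces (the names transfer, leq and the table layout are hypothetical, not the CiaoPP API): the consumer does not re-run the fixpoint computation; it only verifies in a single pass that the supplied answer table is already a fixpoint and that it entails the safety assertions.

```python
def check_certificate(program, answer_table, transfer, leq, safety_assertions):
    """One-pass check of an abstract-interpretation certificate (schematic).

    program           -- mapping: predicate -> its definition (clauses)
    answer_table      -- mapping: predicate -> claimed abstract description
    transfer(defn, t) -- one application of the abstract semantics using table t
    leq(a, b)         -- abstract-domain ordering: a is at least as precise as b
    safety_assertions -- mapping: predicate -> abstract property required for safety
    """
    for pred, claimed in answer_table.items():
        derived = transfer(program[pred], answer_table)
        # The table must be stable: re-deriving an entry adds no new information.
        if not leq(derived, claimed):
            return False
    # The stable table must entail every safety assertion.
    return all(leq(answer_table[p], prop) for p, prop in safety_assertions.items())
```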
Abstract:
The World Health Organization (WHO) predicts that by 2020, Acquired Brain Injury (ABI) will be among the 10 most common causes of disability. Given their physical, sensory, cognitive, emotional and socio-economic consequences, these injuries dramatically change the lives of patients and their families. New early-intervention techniques and the development of intensive care for ABI have markedly improved the probability of survival. However, at present brain injuries have no surgical treatment aimed at restoring the lost functionality; instead, rehabilitation therapies are directed at compensating for the deficits produced. One of the main objectives of neurorehabilitation is therefore to give the patient the capacity needed to perform the Activities of Daily Living (ADLs) required for an independent life, with those in which the Upper Limb (UL) is directly involved being fundamental, given its great importance in the manipulation of objects. The incorporation of new technological solutions into the neurorehabilitation process aims at a new paradigm centred on offering personalised, monitored and ubiquitous practice, with continuous assessment of the efficacy and efficiency of the procedures and the capacity to generate knowledge that drives the break with the current paradigm. The new objectives will consist of minimising the impact of the diseases that affect people's functional capacity, reducing the time of disability and allowing a more efficient management of resources. These clinical objectives, of great socio-economic impact, can only be achieved through a decided commitment to new technologies, methodologies and algorithms capable of bringing about the technological breakthrough needed to overcome the barriers that have so far prevented the universal penetration of technology in the field of rehabilitation. The work and results achieved in this Thesis are the following: 1. ADL modelling: as a step prior to the incorporation of technological aids into the rehabilitation process, a first phase of modelling and formalising the knowledge associated with the execution of the activities performed as part of the therapy is necessary. In particular, the most complex tasks, and at the same time those with the greatest therapeutic impact, are the ADLs, whose formalisation provides healthy movement models that act as a reference for future technological developments aimed at people with ABI. Following a methodology based on UML state-chart diagrams, the ADLs 'serving water from a jug' and 'picking up a bottle' have been modelled. 2. Ubiquitous monitoring of UL movement: a motion acquisition system based on inertial technology has been designed, developed and validated that improves on the limitations of current commercial devices (very high cost and inability to work in uncontrolled environments); the high correlation coefficients and the low error levels obtained in the co-registrations carried out with the commercial BTS SMART-D system demonstrate the high precision of the system.
Exploratory research has also been carried out on a very low-cost motion capture system based on stereoscopic vision, identifying the key points that need to be addressed from a technological point of view for its incorporation into a real environment. 3. Solution of the Inverse Kinematics Problem (IKP): a solution to the IKP has been designed, developed and validated for the case in which the manipulator corresponds to a human UL, studying 2 possible alternatives, one based on a Multilayer Perceptron (MLP) and another based on Artificial Neuro-Fuzzy Inference Systems (ANFIS). The validation, carried out using information from the available ADL models, indicates that a solution based on an MLP with 3 neurons in the input layer, a hidden layer also of 3 neurons and an output layer with as many neurons as Degrees of Freedom (DoFs) in the UL model provides results, both in precision and in computation time, that make it suitable for systems with real-time requirements. 4. Assisted-as-needed intelligent control: an assisted-as-needed control algorithm with anticipatory actuation capabilities has been designed, developed and validated for a robotic orthosis of which a prototype is currently implemented. The results obtained show how the system is able to adapt to the patient's dysfunctional profile, activating the assistance at instants prior to the occurrence of incorrect movements. This strategy implies an increase in the patient's participation and, therefore, in their muscle activity, fostering the brain plasticity processes responsible for motor relearning or readaptation. 5. Robotic simulators for planning: the use of an assisted-as-needed robotic simulator is proposed as a tool for planning personalised rehabilitation sessions with a defined clinical objective in which a robotic orthosis is involved. The results obtained show how, after running certain simple algorithms, it is possible to automatically select a configuration for the assisted-as-needed control algorithm that makes the orthosis adapt to the clinically established criteria for the patient under study. These results invite further work on more advanced parameter-selection algorithms based on batteries of simulations. This work has served to corroborate the research hypotheses put forward at the start of the Thesis and has likewise opened new lines of research. Summary The World Health Organization (WHO) predicts that by the year 2020, Acquired Brain Injury (ABI) will be among the ten most common causes of disability. These injuries dramatically change the life of the patients and their families due to their physical, sensory, cognitive, emotional and socio-economic consequences. New techniques of early intervention and the development of intensive ABI care have noticeably improved the survival rate. However, in spite of these advances, brain injuries still have no surgical or pharmacological treatment to re-establish the lost functions. Neurorehabilitation therapies address this problem by restoring, minimizing or compensating the functional alterations in a person disabled because of a nervous system injury.
One of the main objectives of neurorehabilitation is to provide patients with the capacity to perform the specific Activities of Daily Living (ADL) required for an independent life, especially those in which the Upper Limb (UL) is directly involved, given its great importance in manipulating objects within the patients' environment. The incorporation of new technological aids into the neurorehabilitation process aims to reach a new paradigm focused on offering personalized, monitored and ubiquitous practice, with continuous assessment of both the efficacy and the efficiency of the procedures and with the capacity to generate new knowledge. New targets will be to minimize the impact of the conditions affecting the functional capabilities of the subjects, to decrease the time of physical disability and to allow a more efficient handling of resources. These targets, of great socio-economic impact, can only be achieved by means of new technologies and algorithms able to bring about the technological breakthrough needed to overcome the barriers that have so far prevented the universal penetration of technology in the field of rehabilitation. In this way, this PhD Thesis has achieved the following results: 1. ADL modelling: as a step prior to the incorporation of technological aids into the neurorehabilitation process, a first phase of modelling and formalizing the knowledge associated with the execution of the activities performed as part of the therapy is necessary. In particular, the most complex and therapeutically relevant tasks are the ADLs, whose formalization produces healthy motion models to be used as a reference for future technological developments. Following a methodology based on UML state-chart diagrams, the ADLs 'serving water from a jar' and 'picking up a bottle' have been modelled. 2. Ubiquitous monitoring of UL movement: a motion acquisition system based on inertial technology has been designed, developed and validated that improves on the limitations of current devices (high monetary cost and inability to work within uncontrolled environments); the high correlation coefficients and the low error levels obtained throughout several co-registration sessions with the commercial system BTS SMART-D show the high precision of the system. In addition, an exploration of a very low-cost stereoscopic vision-based motion capture system has been carried out, and the key points that need to be addressed from a technological point of view have been identified. 3. Inverse Kinematics (IK) problem solving: a solution to the IK problem has been proposed for a manipulator that corresponds to a human UL. Two alternatives have been studied, one based on a Multilayer Perceptron (MLP) and another based on Artificial Neuro-Fuzzy Inference Systems (ANFIS). The validation of these solutions, carried out using the information from the previously generated motion models, indicates that an MLP-based solution, with an architecture consisting of 3 neurons in the input layer, one hidden layer of 3 neurons and an output layer with as many neurons as the number of Degrees of Freedom (DoFs) of the UL model, provides the best results both in terms of precision and in terms of processing time, making it suitable for integration within a system with real-time constraints. 4.
Assisted-as-needed intelligent control: an assisted-as-needed control algorithm with anticipatory actuation capabilities has been designed, developed and validated for a robotic orthosis of which a prototype has already been implemented. The results obtained demonstrate that the control system is able to adapt to the dysfunctional profile of the patient by triggering the assistance right before an incorrect movement takes place. This strategy implies an increase in the patient's participation and, therefore, in his or her muscle activity, encouraging the neural plasticity processes responsible for motor relearning. 5. Planning with a robotic simulator: a robotic simulator is proposed as a planning tool for personalized rehabilitation sessions under a given clinical criterion. The results obtained indicate that, after the execution of simple parameter-selection algorithms, it is possible to automatically choose a configuration that makes the assisted-as-needed control algorithm adapt both to the clinical criteria and to the patient. These results invite researchers to work on the development of more complex parameter-selection algorithms based on batteries of simulations. The results obtained have been useful to corroborate the hypotheses set out at the beginning of this PhD Thesis. In addition, they have opened new research lines in all the studied application fields.
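The abstract describes a 3-3-DoF perceptron for the inverse kinematics of the upper limb; the sketch below shows that architecture with scikit-learn, but the arm model, segment lengths, joint ranges and training data are all invented for illustration and are not the thesis' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_DOF = 3                      # illustrative upper-limb model with 3 degrees of freedom
L1, L2 = 0.30, 0.25            # assumed upper-arm and forearm lengths (m)

def forward_kinematics(q):
    """Toy 3-DoF arm: shoulder azimuth/elevation + elbow flexion -> wrist position."""
    az, el, elbow = q
    upper = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    # forearm direction: elevation increased by elbow flexion, same azimuth (simplified)
    fore = np.array([np.cos(el + elbow) * np.cos(az),
                     np.cos(el + elbow) * np.sin(az),
                     np.sin(el + elbow)])
    return L1 * upper + L2 * fore

# Generate (wrist position -> joint angles) pairs from random joint configurations.
Q = rng.uniform(low=[-1.0, -0.5, 0.0], high=[1.0, 1.0, 2.0], size=(5000, N_DOF))
X = np.array([forward_kinematics(q) for q in Q])   # 3 inputs: x, y, z of the wrist

# 3 input neurons, one hidden layer of 3 neurons, N_DOF outputs, as in the abstract.
ik_net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                      max_iter=5000, random_state=0)
ik_net.fit(X, Q)

target = forward_kinematics(np.array([0.3, 0.4, 0.8]))
print("estimated joint angles:", ik_net.predict(target.reshape(1, -1)))
```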
Abstract:
Effective static analyses have been proposed which allow inferring functions that bound the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and such bounds have been shown useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program in a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which, from then on, is able to compute statically time bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
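A minimal sketch of the one-time calibration step, under assumed names and made-up numbers: given statically inferred operation counts for a set of benchmark procedures and their measured execution times on the target platform, the per-operation constants can be fitted by least squares and then reused to turn count bounds into time bounds.

```python
import numpy as np

# Rows: benchmark runs; columns: statically inferred counts of each operation
# type (e.g. resolutions, unifications, arithmetic steps). Illustrative numbers.
counts = np.array([
    [1200,  3400,  150],
    [ 800,  2100,  900],
    [5000, 14000,  300],
    [ 250,   700, 1200],
], dtype=float)

# Execution times (microseconds) measured once on the target platform.
times = np.array([410.0, 320.0, 1650.0, 260.0])

# Calibrate: one time constant per operation type.
k, *_ = np.linalg.lstsq(counts, times, rcond=None)

def time_bound(count_bounds):
    """Turn a vector of operation-count bounds into a time bound (same units)."""
    return float(np.dot(k, count_bounds))

print("per-operation constants:", k)
print("predicted time for a new procedure:", time_bound([2000, 5600, 400]))
```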
Abstract:
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known —as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
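As a language-neutral illustration of why tabling helps with termination (this illustrates the idea of an answer table, not the CCall translation itself), the sketch below memoises answers so that a left-recursive reachability definition reaches a fixpoint instead of looping.

```python
def tabled_reach(edges):
    """Fixpoint evaluation of: reach(X,Y) :- reach(X,Z), edge(Z,Y).  reach(X,Y) :- edge(X,Y)."""
    table = set(edges)             # answer table: reach/2 facts found so far
    changed = True
    while changed:                 # iterate to a fixpoint instead of recursing forever
        changed = False
        for (x, z) in list(table):
            for (z2, y) in edges:
                if z == z2 and (x, y) not in table:
                    table.add((x, y))
                    changed = True
    return table

edges = {(1, 2), (2, 3), (3, 1)}   # a cycle: plain SLD resolution would not terminate here
print(sorted(tabled_reach(edges)))
```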
Abstract:
In the current situation, in which there is, on the one hand, an excess supply of housing (high-priced or second homes) and, on the other, a demand for housing (low-cost and/or social), the property market is paradoxically blocked. This research thus arises as a product of this historical moment, in which housing as a product is subjected to economic debate, not only as a consequence of the deep economic crisis but also for the correct management of resources from the point of view of efficiency and sustainability. The starting hypothesis is that it is necessary to determine a construction cost estimator for owner-developed housing as one of the solutions to housing in rural Extremadura; to this end, the owner-developed housing subsidised by the Junta de Extremadura within the province of Cáceres has been taken as the analysis model. This research aims to establish a precise mathematical tool that allows developers to determine the investment, contractors the possible profit margin, and financial institutions the real value of the loan guarantee. But the objective of greatest social significance of this research is to provide the Junta de Extremadura with a simple tool so that it can set subsidies proportionally. This helps to optimise resources, which in times of crisis is even more pressing, since knowing the cost of the works in advance and with considerable accuracy makes it possible to direct subsidies in proportion to the real needs of execution. In fact, certain characteristics that are difficult to quantify when determining housing subsidies, such as the influence of the number of family members or provision for disability, would be taken into account indirectly in the cost estimated with the method proposed here, since they always imply an increase in the built and usable floor areas, in the façade openings or in the size of wet rooms, and are therefore reflected in the equation of the resulting model. Finally, having a cost estimator strengthens owner-developed housing as a form of settlement, since it supports decision-making by the individual, whether subsidised or not. Indeed, the tool is valid to a certain extent for any owner-developed project; this form of building offers the least scope for speculation and is the most sustainable, it is abundant throughout Extremadura, and it makes the construction sector more efficient by optimising its economic production process. SUMMARY Under the present circumstances, in which there is, on the one hand, an excess in the supply of housing (high-price or second-home) and, on the other hand, a demand for housing (low-cost and/or social), the property market is paradoxically at a standstill. This research has come about as a result of this moment in time, in which housing as a product is undergoing economic debate, not only on account of this serious economic crisis, but also for the proper management of resources from the point of view of efficiency and sustainability. A building-cost estimator for owner-developed housing is deemed necessary as one of the housing solutions for rural Extremadura. To this end, the owner-developed housing subsidized by the Extremadura Regional Government in Cáceres Province has been taken as the analysis model.
This research establishes an accurate mathematical tool to work out the developers' investment, the builders' potential profit margin and the real value of the loan guarantee for the financial institution. But the result of most social relevance in this research is to provide the Extremadura Regional Government with a simple tool so that it can allocate subsidies proportionally. Thus, resources are optimized, an even more vital matter in times of economic slump, because if the cost of the building works is known with some accuracy beforehand, the subsidies can be allocated in proportion to the real needs of execution. In fact, certain elements related to housing subsidies which are hard to quantify, such as the influence of the number of family members or disability support, would be covered indirectly in the cost estimate obtained with the proposed method, since they inevitably involve an increase in built area, exterior wall openings and the size of plumbed rooms; as such, they are covered in the equation of the resulting model. Lastly, the availability of a cost estimator reinforces the owner-developed building model, since it assists decision-making by the individual, whether subsidized or not. This is because the tool is valid to some extent for any owner-development, and this building scheme, which is common throughout Extremadura, is the most sustainable and the least liable to speculation, and it makes the building sector more efficient by optimizing the economic production process.
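A minimal sketch of the kind of estimator the thesis aims for, with entirely hypothetical variables, data and coefficients: built area, façade openings and wet-room area enter a linear model fitted on past subsidised projects.

```python
import numpy as np

# Hypothetical historical data: [built area m2, facade openings m2, wet-room area m2]
X = np.array([
    [ 90.0, 12.0,  9.0],
    [110.0, 15.0, 11.0],
    [ 75.0, 10.0,  7.5],
    [130.0, 18.0, 14.0],
])
cost = np.array([61000.0, 74500.0, 51500.0, 88000.0])   # execution cost in euros (made up)

# Fit a linear cost model with an intercept term.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)

def estimate_cost(built_area, openings, wet_rooms):
    """Predicted execution cost for a new owner-developed house (EUR)."""
    return float(np.dot(coef, [built_area, openings, wet_rooms, 1.0]))

print(f"estimated cost: {estimate_cost(100.0, 14.0, 10.0):.0f} EUR")
```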
Abstract:
The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques heavily rely on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently from precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate their superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
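A small worked instance of the kind of problem addressed (the relation itself is illustrative): solving a cost relation into closed-form upper and lower bounds. Given the cost relation

\[
C(n) = 3 + C(n-1) \quad (n > 0), \qquad 0 \le C(0) \le 1,
\]

unfolding the recurrence \(n\) times gives \(C(n) = 3n + C(0)\), and the constraints on the base case yield the closed-form bounds

\[
3n \;\le\; C(n) \;\le\; 3n + 1 .
\]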
Abstract:
The construction cost estimation systems in Spain are undeveloped and, hence, infrequently used by technicians and professionals in the building sector. However, estimation of an approximate real cost prior to the execution of the work is compulsory under current legal regulations (Technical Building Code). Therefore, the development of research projects on construction cost estimation models such as the one described and demonstrated in this talk is extremely interesting.
Abstract:
This research is concerned with the experimental software engineering area, specifically experiment replication. Replication has traditionally been viewed as a complex task in software engineering. This is possibly due to the present immaturity of the experimental paradigm applied to software development. Researchers usually use replication packages to replicate an experiment. However, replication packages are not the solution to all the information management problems that crop up when successive replications of an experiment accumulate. This research borrows ideas from the software configuration management and software product line paradigms to support the replication process. We believe that configuration management can help to manage and administer information from one replication to another: hypotheses, designs, data analysis, etc. The software product line paradigm can help to organize and manage any changes introduced into the experiment by each replication. We expect the union of the two paradigms in replication to improve the planning, design and execution of further replications and their alignment with existing replications. Additionally, this research work will contribute a web support environment for archiving information related to different experiment replications. It will also provide sufficiently flexible information management support for running replications with different numbers and types of changes. Finally, it will afford massive storage of data from different replications. Experimenters working collaboratively on the same experiment must all have access to the different replications.
Abstract:
Polysilicon cost impacts significantly on the photovoltaic (PV) cost and on the energy payback time. Nowadays, the prevailing production process is the so-called Siemens process: polysilicon deposition by chemical vapor deposition (CVD) from trichlorosilane. The polysilicon purification level required for PV is to a certain extent less demanding than for microelectronics. At the Instituto de Energía Solar (IES), research on this subject is performed through a Siemens-type laboratory reactor. Through the laboratory CVD prototype at the IES laboratories, valuable information about the phenomena involved in the polysilicon deposition process and the operating conditions is obtained. Polysilicon deposition by CVD is a complex process due to the large number of parameters involved. A study of the influence of temperature and inlet gas mixture composition on the polysilicon deposition growth rate, based on experimental experience, is shown. Moreover, the CVD process accounts for the largest contribution to the energy consumption of polysilicon production. In addition, radiation is mainly responsible for the low energy efficiency of the whole process. This work presents a model of radiation heat loss, and the theoretical calculations are confirmed experimentally through the prototype reactor at our disposal, yielding valuable know-how for reducing energy consumption in industrial Siemens reactors.
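A minimal sketch of the kind of radiative loss estimate involved (geometry, emissivity and temperatures are illustrative assumptions, not the reactor model of the paper): a grey-body Stefan-Boltzmann balance between the hot silicon rod surface and the cooled reactor wall.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(rod_area_m2, t_rod_k, t_wall_k, emissivity):
    """Grey-body radiative heat loss from the rod surface to a large cold enclosure (W)."""
    return emissivity * SIGMA * rod_area_m2 * (t_rod_k**4 - t_wall_k**4)

# Illustrative numbers: 1 m of rod, 5 cm diameter, deposition at ~1373 K, wall at ~400 K.
area = math.pi * 0.05 * 1.0
print(f"radiative loss: {radiative_loss(area, 1373.0, 400.0, 0.7):.0f} W per metre of rod")
```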
Abstract:
Solar drying is one of the important processes used for extending the shelf life of agricultural products. To meet consumer requirements, solar drying should curtail total drying time while preserving product quality. Therefore, the objective of this study was to develop a fuzzy logic-based control system, which performs a 'human-operator-like' control approach by using previously developed low-cost model-based sensors. The fuzzy logic toolbox of MatLab and the Borland C++ Builder tool were utilized to develop the required control system. An experimental solar dryer, constructed by CONA SOLAR (Austria), was used during the development of the control system. Sensirion sensors were used to characterize the drying air at different positions in the dryer, and the smart sensor SMART-1 was applied in order to include the rate of wood water extraction in the control system (the difference in absolute humidity of the air between the outlet and the inlet of the solar dryer is considered by SMART-1 to be the extracted water). A comprehensive test over a 3-week period for different fuzzy control models has been performed, and the data obtained from these experiments were analyzed. The findings from this study suggest that the developed fuzzy logic-based control system is able to tackle the difficulties related to the control of the solar drying process.
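A minimal Mamdani-style sketch of such a rule-based controller (the membership functions, rule base and actuation variable are invented for illustration; the study itself used the MatLab fuzzy logic toolbox and Borland C++ Builder):

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(drying_air_temp_c, water_extraction_g_per_h):
    """Two illustrative rules combined by weighted average; returns a 0..1 fan command."""
    hot = tri(drying_air_temp_c, 45, 60, 75)
    cool = tri(drying_air_temp_c, 20, 35, 50)
    slow_extraction = tri(water_extraction_g_per_h, 0, 50, 150)
    # Rule 1: IF air is hot AND extraction is slow THEN increase airflow (command 0.9)
    w1 = min(hot, slow_extraction)
    # Rule 2: IF air is cool THEN keep airflow low (command 0.2)
    w2 = cool
    if w1 + w2 == 0.0:
        return 0.5                      # neutral command when no rule fires
    return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)

print(fan_speed(58.0, 40.0))
```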
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, that are considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. On the other hand, probabilistic safety analysis tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for a comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) comes in, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamic analysis support through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS relies intensively on code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation, and thus of complex nuclear plant models, into the risk assessment process is what makes it so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, not much work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
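As a language-neutral illustration of the cost problem (plain Monte Carlo over a made-up one-dimensional stand-in, not the ISA/SCAIS machinery): each sample of the uncertain parameters requires one expensive accident simulation, so the number of samples needed for a given accuracy is exactly what the work tries to reduce.

```python
import random

def accident_simulation(operator_delay_s):
    """Stand-in for an expensive coupled thermal-hydraulic run: returns a peak temperature (K)."""
    return 900.0 + 8.0 * operator_delay_s          # made-up monotone response

DAMAGE_LIMIT_K = 1200.0

def damage_probability(n_simulations, seed=0):
    """Fraction of sampled sequences whose simulated peak temperature exceeds the damage limit."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_simulations):
        delay = rng.expovariate(1 / 30.0)          # uncertain operator action time (s)
        if accident_simulation(delay) > DAMAGE_LIMIT_K:
            exceed += 1
    return exceed / n_simulations

print(damage_probability(10000))
```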
Abstract:
Software testing is currently the most widely used technique for validating and assessing the quality of a program. Testing is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system and their deployment to production, every piece of a software product must be thoroughly tested before the product can be released to a production environment. The main limitation of software testing is that it remains a set of manual tasks, representing a large share of the total development cost. In this scenario, automation is essential to alleviate these high costs. Automatic test case generation (TCG) is the process of automatically generating test cases that achieve high coverage of the program. Among the wide variety of approaches to TCG, this thesis focuses on a structural, white-box approach, and more specifically on one of the most widely used techniques today, symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments instead of concrete values. This thesis builds on a general framework for automatic test case generation aimed at imperative object-oriented programs (e.g., Java) and based on constraint logic programming (CLP). In this general framework, the imperative program under test is first translated into an equivalent CLP program, and this CLP program is then executed symbolically using the standard CLP evaluation mechanisms, extended with special operations for handling dynamic data structures. Improving the scalability and efficiency of symbolic execution is a major challenge. It is well known that symbolic execution becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, test case generation by symbolic execution tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this thesis can be summarised as follows. (1) A compositional CLP-based approach to test case generation is developed, which seeks to alleviate the inter-procedural path explosion problem by analysing each component (e.g., method) of the program under test separately, storing the results and reusing them incrementally until results for the whole program are obtained. A compositional approach based on program specialisation (partial evaluation) has also been developed for the symbolic execution tool Symbolic PathFinder (SPF). (2) A methodology is proposed for using information about the resource consumption of the program under test to guide symbolic execution towards those parts of the program that satisfy a given resource policy, avoiding the exploration of the parts of the program that violate it.
(3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of the program, which uses abstractions as trace generators to guide the execution according to structural selection criteria. (4) A new constraint solver is proposed, which efficiently handles constraints on the use of dynamic global memory (the heap) during symbolic execution and considerably improves the performance of the standard technique used for this purpose, lazy initialization. (5) All the proposed techniques have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that all of them considerably improve the scalability and efficiency of symbolic execution and test case generation. ABSTRACT Testing is nowadays the most used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role towards the success of any software project. From the smallest units of code to the most complex components, their integration into a software system and their later deployment, all pieces of a software product must be tested thoroughly before it can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with its input arguments being symbolic expressions rather than concrete values. This thesis relies on a previously developed constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, and the translated program is then symbolically executed by relying on the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special treatment for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which overcomes the inter-procedural path explosion by separately analyzing each component (method) in a program under test, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy that relies on program specialization is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF).
(2) Resource-driven TCG is proposed as a methodology to use resource consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of those parts of the program that violate such policy. (3) A generic methodology to guide symbolic execution towards the most interesting parts of a program is proposed, which uses abstractions as oracles to steer symbolic execution through those parts of the program under test that interest the programmer/tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the state-of-the-art standard technique known as lazy initialization. (5) All techniques above have been implemented in the PET system (and some of them in the SPF tool). Experimental evaluation has confirmed that they considerably help towards a more scalable and efficient symbolic execution and TCG.
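A toy illustration of path-constraint collection in symbolic execution (the program, the constraint representation and the brute-force "solver" are simplifications for exposition, unrelated to the PET or SPF implementations): each feasible path yields a path condition, and a concrete input satisfying it becomes a test case.

```python
from itertools import product

def paths_of_abs_diff():
    """Path conditions and symbolic results for: if x > y: r = x - y else: r = y - x."""
    return [
        (["x > y"],  "x - y"),
        (["x <= y"], "y - x"),
    ]

def find_test_input(constraints, domain=range(-3, 4)):
    """Tiny stand-in for a constraint solver: brute-force a concrete (x, y) satisfying the path condition."""
    for x, y in product(domain, repeat=2):
        if all(eval(c, {}, {"x": x, "y": y}) for c in constraints):
            return {"x": x, "y": y}
    return None

for constraints, result in paths_of_abs_diff():
    print(constraints, "->", result, "| test input:", find_test_input(constraints))
```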
Abstract:
ABSTRACT: The comparison of the different bids in the tender for a project, under the traditional contract system based on unit rates and open to re-measurement, requires analysis tools able to discriminate between proposals with a similar overall economic impact but potentially very different behaviour during the execution of the works. RESUMEN: Rapid cost estimation in the early phases of a project by means of parametric methods and statistical references is a well studied subject, widely disseminated and applied in the construction sector. However, there is little technical literature on systems for the preliminary estimation of construction time that allow a schedule to be drawn up quickly with a reasonable degree of approximation. This text brings together two already known but hitherto independent aspects, plus a contribution of its own: the estimation of the final construction period from statistical references (BCIS, 2000); the estimation of the distribution of the total cost over the execution by means of "S" curves (various authors); and the estimation of the duration of the activities as a function of their cost. Together, these three techniques, applied to a project, make it possible to obtain a schedule with a sufficient degree of detail and reliability to support decisions in the early phases of the project.
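A minimal sketch combining the three ingredients named in the text, with illustrative coefficients (the real statistical references are in BCIS, 2000; the constants below are not those values): a duration estimated from total cost, a logistic "S" curve for the cumulative cost distribution, and the fraction of cost spent at intermediate points of the schedule.

```python
import math

def estimate_duration_weeks(total_cost_eur, k=2.5, b=0.3):
    """Statistical duration estimate D = k * (C/1000)^b; k and b are illustrative, not BCIS values."""
    return k * (total_cost_eur / 1000.0) ** b

def cumulative_cost_fraction(t_fraction, steepness=6.0):
    """Logistic 'S' curve: share of the total cost spent after a fraction of the duration."""
    raw = lambda s: 1.0 / (1.0 + math.exp(-steepness * (s - 0.5)))
    return (raw(t_fraction) - raw(0.0)) / (raw(1.0) - raw(0.0))   # normalised to [0, 1]

total_cost = 850_000.0
print(f"estimated duration: {estimate_duration_weeks(total_cost):.0f} weeks")
for frac in (0.25, 0.5, 0.75, 1.0):
    print(f"after {frac:.0%} of the time, ~{cumulative_cost_fraction(frac):.0%} of the cost is spent")
```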
Abstract:
This Thesis pursues the definition and development of a knowledge-based system that allows assembly line models to be generated during the conceptual definition phase of an aeronautical airframe structure. To this end, it proposes the definition of a formal model of the concurrent process associated with the design of assembly lines in the conceptual phase, and of a model of the basic data structure needed to support that process. Both models serve as the basis for the development of a proof-of-concept application in the environment of the commercial CAX-PLM system CATIA v5. The generated line models integrate the three basic structures defined in the proposed model: product, processes and resources. The generated models are "assembly" structures, based on "manufacturing" product structures that are in turn derived from "design" structures. Each generated model is evaluated in terms of four basic estimates: maximum dimensions of the product node, transport distance and means to be used, total execution time and total cost. Assembly line models are generated concurrently with the product design function, which therefore offers the opportunity to influence it and to include manufacturing and assembly requirements in the product in the early phases of its life cycle, providing a clear competitive advantage. The development proposed in this Thesis lays the foundations for further developments aimed at assisting designers during the conceptual phase of assembly line design. The prototype application developed demonstrates the feasibility of the conceptual proposal made in the Thesis. ABSTRACT The current thesis proposes the definition and development of a knowledge-based system to generate aircraft component assembly line models during the conceptual phase of the product life cycle. With this objective, the definition of a formal activity model to represent the design of assembly lines during the conceptual phase is proposed; such a model considers the concurrence with the product design process. Associated with the activity model, a data structure model is defined to support such a process. Both models are the basis for the development of a proof-of-concept application within the environment of the commercial CAX-PLM system CATIA v5. The generated assembly line models integrate the three basic structures defined in the proposed model: product, processes and resources. The generated models are "As Prepared" structures based on "As Planned" structures derived from "As Designed" structures. Each generated model is evaluated in terms of four basic estimates: maximum dimensions of the product node, transport distance and transport means to be used, total execution time and total cost. Assembly line models are generated in concurrence with the product design function. Therefore, it provides the opportunity to influence it and allows including manufacturing and assembly requirements early in the product life cycle, which gives a clear competitive advantage. The development proposed in this Thesis lays the foundations for further developments aimed at assisting designers during the conceptual phase of the assembly line design process. The developed prototype application shows the feasibility of the conceptual proposal presented in the Thesis.
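A minimal sketch of the product/process/resource structure and the four basic estimates described above, with hypothetical field names and made-up figures:

```python
from dataclasses import dataclass

@dataclass
class Station:
    """One assembly station: a process step plus the resource that performs it."""
    name: str
    node_dimensions_m: tuple      # max (length, width, height) of the product node handled here
    transport_distance_m: float
    transport_mean: str           # e.g. "crane", "AGV"
    execution_time_h: float
    resource_cost_per_h: float

def evaluate_line(stations):
    """The four basic estimates used to compare candidate assembly line models."""
    return {
        "max_node_dimensions_m": tuple(max(s.node_dimensions_m[i] for s in stations) for i in range(3)),
        "total_transport_m": sum(s.transport_distance_m for s in stations),
        "total_time_h": sum(s.execution_time_h for s in stations),
        "total_cost_eur": sum(s.execution_time_h * s.resource_cost_per_h for s in stations),
    }

line = [
    Station("join skin panels", (12.0, 4.0, 2.5), 30.0, "crane", 16.0, 180.0),
    Station("install systems",  (12.0, 4.0, 2.5), 10.0, "AGV",   24.0, 150.0),
]
print(evaluate_line(line))
```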