997 results for Modular Framework
Abstract:
Outlier detection is an important form of data analysis because outliers often contain the most interesting and important pieces of information. In recent years, many different outlier detection algorithms have been devised for finding different kinds of outliers in varying contexts and environments. Some effort has been put into studying how to effectively combine different outlier detection methods. This thesis studied the combination of outlier detection algorithms as an ensemble by designing a modular framework for outlier detection that combines arbitrary outlier detection techniques. This work resulted in an example implementation of the framework. The outlier detection capability of the ensemble method was validated using datasets and methods found in outlier detection research. The framework achieved better results than the individual outlier detection algorithms. Future research includes how to handle large datasets effectively and the possibilities for real-time outlier monitoring.
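A minimal sketch of how such an ensemble might combine arbitrary detectors, assuming simple min-max score normalisation and averaging (the abstract does not specify the combination rule); the two detectors below are illustrative stand-ins:

```python
import numpy as np

def zscore_scores(X):
    """Outlier score per point: maximum absolute z-score across features."""
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    return z.max(axis=1)

def knn_distance_scores(X, k=5):
    """Outlier score per point: distance to the k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the distance of each point to itself

def normalise(s):
    """Min-max scale scores to [0, 1] so heterogeneous detectors are comparable."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def ensemble_scores(X, detectors):
    """Average the normalised scores of arbitrary detector callables."""
    return np.mean([normalise(d(X)) for d in detectors], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), [[8.0, 8.0]]])  # one obvious outlier
    scores = ensemble_scores(X, [zscore_scores, knn_distance_scores])
    print("most suspicious point:", X[np.argmax(scores)])
```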
Abstract:
According to Diener (1984), the three primary components of subjective well-being (SWB) are high life satisfaction (LS), frequent positive affect (PA), and infrequent negative affect (NA). The present dissertation extends previous research and theorizing on SWB by testing an innovative framework developed by Shmotkin (2005) in which SWB is conceptualized as an agentic process that promotes and maintains positive functioning. Two key components of Shmotkin's framework were explored in a longitudinal study of university students. In Part 1, SWB was examined as an integrated system of components organized within individuals. Using cluster analysis, five distinct configurations of LS, PA, and NA were identified at each wave. Individuals' SWB configurations were moderately stable over time, with the highest and lowest stabilities observed among participants characterized by "high SWB" and "low SWB" configurations, respectively. Changes in SWB configurations in the direction of a high-SWB pattern, and stability among participants already characterized by high SWB, coincided with better than expected mental, physical, and interpersonal functioning over time. More positive levels of functioning and improvements in functioning over time discriminated among SWB configurations. However, prospective effects of SWB configurations on subsequent functioning were not observed. In Part 2, subjective temporal perspective "trajectories" were examined based on individuals' ratings of their past, present, and anticipated future LS. Upward subjective LS trajectories were normative at each wave. Cross-sectional analyses revealed consistent associations between upward subjective trajectories and lower levels of LS, as well as less positive mental, physical, and interpersonal functioning. Upward subjective LS trajectories were biased both with respect to underestimation of past LS and overestimation of future LS, demonstrating their illusory nature. Further, whereas more negative retrospective bias was associated with greater current distress and dysfunction, more positive prospective bias was associated with less positive functioning in the future. Prospective relations, however, were not consistently observed. Thus, a steep upward subjective LS trajectory appeared to be a form of wishful thinking rather than an adaptive form of self-enhancement. Major limitations and important directions for future research are considered. Implications for Shmotkin's (2005) framework, and for research on SWB more generally, are also discussed.
Abstract:
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented in a computationally efficient way and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We sought to overcome these shortcomings by creating novel tile unwrapping and merging algorithms, as well as a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for the efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.
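A minimal sketch of the tile-based pipeline described above (tessellate, unwrap each tile, merge), written in Python rather than the C++ of the framework, assuming the image dimensions are multiples of the tile size, and using a simple row/column unwrap and a greedy 2π-offset merger in place of the thesis's model-based unwrapper and quality-guided merger:

```python
import numpy as np

def unwrap_tile(tile):
    """Stand-in tile unwrapper: unwrap rows, then columns (the thesis instead
    uses a model-based unwrapper built on linear algebra)."""
    return np.unwrap(np.unwrap(tile, axis=1), axis=0)

def merge_tiles(tiles, tile_shape):
    """Greedy raster-order merger: shift each unwrapped tile by the multiple of
    2*pi that best matches its already-placed left/top neighbours."""
    rows, cols = len(tiles), len(tiles[0])
    th, tw = tile_shape
    out = np.zeros((rows * th, cols * tw))
    for i in range(rows):
        for j in range(cols):
            t = tiles[i][j].copy()
            diffs = []
            if j > 0:  # compare with the left neighbour along the shared boundary
                diffs.append(out[i*th:(i+1)*th, j*tw - 1] - t[:, 0])
            if i > 0:  # compare with the top neighbour along the shared boundary
                diffs.append(out[i*th - 1, j*tw:(j+1)*tw] - t[0, :])
            if diffs:
                k = np.round(np.mean(np.concatenate(diffs)) / (2 * np.pi))
                t += 2 * np.pi * k
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = t
    return out

def tile_unwrap(wrapped, tile=32):
    """Tessellate, unwrap each tile independently, then merge."""
    tiles = [[unwrap_tile(wrapped[i:i+tile, j:j+tile])
              for j in range(0, wrapped.shape[1], tile)]
             for i in range(0, wrapped.shape[0], tile)]
    return merge_tiles(tiles, (tile, tile))
```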
Abstract:
BACKGROUND The recent occurrence and spread of African swine fever (ASF) in Eastern Europe is perceived as a serious risk for the pig industry in the European Union (EU). In order to estimate the potential risk of ASF virus (ASFV) entering the EU, several pathways of introduction were previously assessed separately. The present work aimed to integrate five of these assessments (legal imports of pigs, legal imports of products, illegal imports of products, fomites associated with transport, and wild boar movements) into a modular tool that facilitates the visualization and comprehension of the relative risk of ASFV introduction into the EU by each analyzed pathway. RESULTS The framework's results indicate that 48% of EU countries are at relatively high risk (risk score 4 or 5 out of 5) of ASFV entry for at least one analyzed pathway. Four of these countries obtained the maximum risk score for one pathway: Bulgaria for legally imported products during the high risk period (HRP); Finland for wild boar; Slovenia and Sweden for legally imported pigs during the HRP. The distribution of risk differed considerably from one pathway to another; for some pathways the risk was concentrated in a few countries (e.g., transport fomites), whereas other pathways posed a high risk for four or five countries (legal pigs, illegal imports and wild boar). CONCLUSIONS The modular framework, developed to estimate the risk of ASFV entry into the EU, is publicly available and is a transparent, easy-to-interpret tool that can be updated and adapted if required. The model's results identify the EU countries at higher risk for each ASFV introduction route, and provide a useful basis for developing a globally coordinated program to improve ASFV prevention in the EU.
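A small sketch of the kind of per-pathway scoring such a modular tool combines into an overall country-level picture; the pathway names follow the abstract, but the country scores below are hypothetical placeholders, not the study's results:

```python
# Hypothetical per-country pathway scores on the 1-5 ordinal scale used in the
# abstract; the values are illustrative only.
pathway_scores = {
    "Bulgaria": {"legal_pigs": 2, "legal_products": 5, "illegal_products": 3,
                 "transport_fomites": 2, "wild_boar": 3},
    "Finland":  {"legal_pigs": 1, "legal_products": 2, "illegal_products": 2,
                 "transport_fomites": 1, "wild_boar": 5},
}

def high_risk_countries(scores, threshold=4):
    """Countries whose maximum pathway score reaches the 'relatively high risk'
    band (score 4 or 5) for at least one introduction route."""
    return {country: max(p.values()) for country, p in scores.items()
            if max(p.values()) >= threshold}

print(high_risk_countries(pathway_scores))
```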
Abstract:
Video games are increasingly one of the largest areas of the entertainment industry, which has been expanding year after year. Moreover, video games are ever more present in our daily lives, whether through mobile devices or the new consoles. Based on this premise, it is safe to say that investment in this field will bring more gains than losses. This dissertation aims to study the state of the video game industry, with its main focus on the design of a video game built on a Modular Framework that was also developed within the scope of this dissertation. To this end, a study of the technological state of the art is carried out, in which several game creation tools are studied and analysed in order to understand the strengths and weaknesses of each, together with a study of the state of the business, thereby forming a more concrete idea of the various points required to create a video game. Next, the different existing video game genres are discussed and a small video game is conceptualised, also taking into account the interface types most commonly used in the industry, so as to understand which is the most viable for the chosen genre, as well as the different mechanics to be present in the game. The Modular Framework is developed taking into account all the previous analysis and the conceptualised video game. Its main goal is a high degree of customisation and maintainability: every implemented module can be replaced by another without creating conflicts. Finally, in order to bring together all the topics analysed throughout this dissertation, a Prototype is developed to demonstrate that the Framework works correctly, applying all the decisions previously made.
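A minimal sketch, in Python and purely illustrative (the dissertation does not specify an engine or language), of the module-slot idea that lets one implementation be replaced by another without affecting the rest of the game:

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """Common contract every game module implements."""
    @abstractmethod
    def update(self, dt: float) -> None: ...

class SimpleInput(Module):
    def update(self, dt): print("polling keyboard")

class GamepadInput(Module):
    def update(self, dt): print("polling gamepad")

class Game:
    """Holds one implementation per module slot; swapping a slot never touches
    the other modules, so replacements cannot conflict with each other."""
    def __init__(self):
        self._slots: dict[str, Module] = {}

    def bind(self, slot: str, module: Module) -> None:
        self._slots[slot] = module

    def tick(self, dt: float) -> None:
        for module in self._slots.values():
            module.update(dt)

game = Game()
game.bind("input", SimpleInput())
game.tick(1 / 60)
game.bind("input", GamepadInput())   # swap the module without changing the rest
game.tick(1 / 60)
```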
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.
Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.
This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
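A minimal sketch of the incremental idea applied to a greedy word-length search: early passes use few Monte-Carlo samples, and the sample count is tightened as the search converges, with a final check at full confidence. The toy dataflow, error model and signal names are hypothetical, and this is not HOPLITE's actual API:

```python
import random

def quantize(value, bits):
    """Round a value to a fixed-point grid with `bits` fractional bits."""
    step = 2.0 ** -bits
    return round(value / step) * step

def mc_error(wl, n_samples):
    """Monte-Carlo estimate of the worst output error of a toy dataflow
    y = a*b + c when each signal uses the word-lengths in `wl`."""
    worst = 0.0
    for _ in range(n_samples):
        a, b, c = (random.uniform(-1, 1) for _ in range(3))
        exact = a * b + c
        approx = (quantize(quantize(a, wl["a"]) * quantize(b, wl["b"]), wl["mul"])
                  + quantize(c, wl["c"]))
        worst = max(worst, abs(exact - approx))
    return worst

def greedy_search(max_err, start_bits=16, min_samples=200, full_samples=5000):
    wl = {s: start_bits for s in ("a", "b", "c", "mul")}
    samples, improved = min_samples, True          # relaxed confidence at first
    while improved or samples < full_samples:
        improved = False
        for s in wl:                               # try shaving one bit off each signal
            wl[s] -= 1
            if mc_error(wl, samples) > max_err:
                wl[s] += 1                         # roll back if the budget is broken
            else:
                improved = True
        samples = min(full_samples, samples * 2)   # tighten confidence as we converge
    while mc_error(wl, full_samples) > max_err:    # repair optimistic early accepts
        for s in wl:
            wl[s] += 1
    return wl

print(greedy_search(max_err=1e-3))
```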
Abstract:
Master's Dissertation in Informatics Engineering, 2nd Semester, 2011/2012
Abstract:
The transdiagnostic approach to eating disorders has led to significant benefits for the research and treatment of "eating disorder not otherwise specified" (EDNOS). There is currently almost no research on "anxiety disorder not otherwise specified" (ADNOS). This case report describes a transdiagnostic approach to the treatment of ADNOS, using a modular framework. The treatment was successful in the short term but not in the longer term. It is concluded that increasing the evidence base for transdiagnostic treatment of anxiety disorders is a clinical and research priority.
Abstract:
Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of a given resource such as the traditional execution steps, time or memory, and, more recently, energy consumption or user-defined resources (e.g., number of bits sent over a socket, number of database accesses, number of calls to particular procedures, etc.). This is performed statically, i.e., without actually running the programs. Resource usage information is useful for a variety of optimization and verification applications, as well as for guiding software design. For example, programmers can use such information to choose different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; parallelizing compilers can use cost estimates for granularity control, which tries to balance the overheads of task creation and manipulation against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing a number of relevant challenges and overcoming many of the limitations it presented. The goal of that prototype was to show the viability of casting resource analysis as an abstract domain, and how it could overcome important limitations of state-of-the-art resource usage analysis tools. For this purpose, it was implemented as an abstract domain in the abstract interpretation framework of the CiaoPP system, PLAI. We have improved both the design and the implementation of the prototype, with the eventual aim of evolving the tool to an industrial application level. The abstract operations of such a tool depend heavily on setting up, and finding closed-form solutions for, recurrence relations that represent the resource usage behavior of program components and of the whole program. While there exist many tools, such as Computer Algebra Systems (CAS) and libraries, able to find closed-form solutions for some types of recurrences, none of them alone is able to handle all the types of recurrences arising during program analysis. In addition, there are some types of recurrences that cannot be solved by any existing tool. This clearly constitutes a bottleneck for this kind of resource usage analysis. Thus, one of the major challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, able to combine and take advantage of the results of existing solvers. Additionally, we have developed and integrated into our novel solver a technique for finding upper-bound closed-form solutions of a special class of recurrence relations that arise during the analysis of programs with accumulating parameters.
Finally, we have integrated the improved resource analysis into the CiaoPP general framework for resource usage verification, and have specialized that framework for verifying energy consumption specifications of embedded imperative programs in a real application, showing the usefulness and practicality of the resulting tool.
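A minimal sketch of the dispatcher idea behind such a modular recurrence solver, written in Python rather than the Ciao/Prolog setting of the thesis: each back-end handles one class of recurrences (here two simple guess-and-verify solvers stand in for CAS back-ends), and the framework tries them in turn and returns the first closed form found:

```python
import numpy as np

def unfold(rec, f0, n_terms=12):
    """Evaluate the recurrence numerically: rec(n, prev) gives f(n) from f(n-1)."""
    vals = [f0]
    for n in range(1, n_terms):
        vals.append(rec(n, vals[-1]))
    return np.array(vals, dtype=float)

def poly_solver(vals, max_deg=4):
    """Back-end 1: guess a polynomial closed form and verify it on all terms."""
    n = np.arange(len(vals))
    for deg in range(max_deg + 1):
        coeffs = np.polyfit(n[:deg + 2], vals[:deg + 2], deg)
        if np.allclose(np.polyval(coeffs, n), vals):
            return lambda k, c=coeffs: float(np.polyval(c, k))
    return None

def geometric_solver(vals):
    """Back-end 2: guess f(n) = A*r**n + B and verify."""
    d = np.diff(vals)
    if np.any(d[:-1] == 0):
        return None
    r = d[1] / d[0]
    if r == 1 or not np.allclose(d[1:] / d[:-1], r):
        return None
    A = d[0] / (r - 1)
    B = vals[0] - A
    return lambda k, A=A, B=B, r=r: A * r**k + B

def solve(rec, f0, solvers=(poly_solver, geometric_solver)):
    """Modular dispatcher: try each back-end in turn, return the first closed form."""
    vals = unfold(rec, f0)
    for backend in solvers:
        closed = backend(vals)
        if closed is not None:
            return closed
    return None  # no back-end could handle this recurrence

# f(n) = 2*f(n-1) + 1, f(0) = 1  ->  closed form 2**(n+1) - 1
closed = solve(lambda n, prev: 2 * prev + 1, f0=1)
print(closed(10))   # 2047
```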
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Cost estimation is an important but challenging process when designing a new product or a feature of it, verifying the product prices given by suppliers, or planning cost-saving actions for existing products. It is even more challenging when the product is highly modular rather than a bulk product. In general, cost estimation techniques can be divided into two main groups - qualitative and quantitative techniques - which can further be classified into more detailed methods. Generally, qualitative techniques are preferable when comparing alternatives, and quantitative techniques when cost relationships can be found. The main objective of this thesis was to develop a method for estimating the costs of internally manufactured and commercial elevator landing doors. Because of the challenging product structure, the proposed cost estimation framework is developed on three different levels, based on the past cost information available. The framework combines features from both qualitative and quantitative cost estimation techniques. The starting point for the whole cost estimation process is an unambiguous, hierarchical product structure, so that the product can be broken down into controllable parts and is thus easier to handle. Those controllable parts can then be compared with existing cost knowledge of similar parts, producing cost estimates that are as accurate as possible.
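A small illustrative sketch of the bottom-up idea (hierarchical structure, part-level comparison with past cost knowledge); the part names, costs and fallback value are hypothetical, not data from the thesis:

```python
# Illustrative only: hypothetical parts and costs.
historical_costs = {          # past cost knowledge for similar parts (EUR)
    "door_panel_steel": 120.0,
    "frame_standard": 80.0,
    "sill_aluminium": 45.0,
}

product_structure = {         # hierarchical structure broken into controllable parts
    "landing_door": {
        "panel_assembly": ["door_panel_steel", "door_panel_steel"],
        "frame_assembly": ["frame_standard", "sill_aluminium"],
    }
}

def estimate(node, fallback=100.0):
    """Sum known part costs bottom-up; parts with no past cost data fall back to a
    default analogy estimate, loosely mirroring the idea of estimation levels that
    depend on how much past cost information is available."""
    if isinstance(node, dict):
        return sum(estimate(child) for child in node.values())
    if isinstance(node, list):
        return sum(estimate(part) for part in node)
    return historical_costs.get(node, fallback)

print(f"estimated cost: {estimate(product_structure):.2f} EUR")
```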
Abstract:
Recently, Small Modular Reactors (SMRs) have attracted increased public discussion. While large nuclear power plant new build projects are facing challenges, the focus of attention is turning to small modular reactors. One particular project challenge arises in the area of nuclear licensing, which plays a significant role in new build projects, affecting their quality as well as their costs and schedules. This dissertation - positioned in the field of nuclear engineering but also with a significant section in the field of systems engineering - examines nuclear licensing processes and their suitability for the characteristics of SMRs. The study investigates the licensing processes in selected countries, as well as in other safety-critical industry fields. Viewing the licensing processes and their separate licensing steps in terms of SMRs, the study adopts two different analysis theories for review and comparison. The primary data consist of a literature review, semi-structured interviews, and questionnaire responses concerning licensing processes and practices. The result of the study is a recommendation for a new, optimized licensing process for SMRs. The most important SMR-specific feature, in terms of licensing, is the modularity of the design. Here, modularity refers to multi-module SMR designs, which create new challenges in the licensing process. As this study focuses on Finland, the main features of the new licensing process are adapted to the current Finnish licensing process, aiming to achieve the main benefits with minimal modifications to the current process. The application of the new licensing process is developed using Systems Engineering, Requirements Management, and Project Management practices and tools. Nuclear licensing involves a large amount of data and documentation which needs to be managed in a suitable manner throughout the new build project and then during the whole life cycle of the nuclear power plant. To enable a smooth licensing process, and therefore ensure the success of a new build nuclear power plant project, management processes and practices play a significant role. This study contributes to the theoretical understanding of how licensing processes are structured and how they are put into action in practice. The findings clarify the suitability of different licensing processes and their selected licensing steps for SMR licensing. The results combine the most suitable licensing steps into a new licensing process for SMRs. The results are also extended to the concept of licensing management practices and tools.
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
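For context, a minimal sketch of what a Prism-family, separate-and-conquer rule inducer looks like (illustrative Python, not the authors' implementation or their parallel framework): each rule is specialised one attribute-value pair at a time until it covers only the target class, and the examples it covers are then removed:

```python
def prism(instances, target_class):
    """Tiny Prism-style learner: induce rules {attribute: value, ...} -> target_class
    from a list of (features, label) pairs by separate-and-conquer."""
    remaining = [x for x in instances if x[1] == target_class]   # uncovered positives
    pool, rules = list(instances), []
    while remaining:
        rule, covered = {}, list(pool)
        # Specialise until the rule covers only the target class (or cannot improve).
        while any(label != target_class for _, label in covered):
            best, best_prob = None, -1.0
            for feats, _ in covered:
                for attr, val in feats.items():
                    if attr in rule:
                        continue
                    subset = [(f, l) for f, l in covered if f.get(attr) == val]
                    prob = sum(l == target_class for _, l in subset) / len(subset)
                    if prob > best_prob:
                        best, best_prob = (attr, val), prob
            if best is None:
                break                       # no attribute left to specialise on
            rule[best[0]] = best[1]
            covered = [(f, l) for f, l in covered if f.get(best[0]) == best[1]]
        covered_pos = [f for f, l in covered if l == target_class]
        if not covered_pos:
            break                           # safety guard against degenerate data
        rules.append(rule)
        remaining = [x for x in remaining if x[0] not in covered_pos]
        pool = [x for x in pool if x[0] not in covered_pos]
    return rules

data = [({"outlook": "sunny", "windy": "no"}, "play"),
        ({"outlook": "sunny", "windy": "yes"}, "stay"),
        ({"outlook": "rain", "windy": "no"}, "stay")]
print(prism(data, "play"))   # -> [{'outlook': 'sunny', 'windy': 'no'}]
```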
Abstract:
Context-sensitive analysis provides information which is potentially more accurate than that provided by context-free analysis. Such information can then be applied in order to validate or debug the program and/or to specialize it, obtaining important improvements. Unfortunately, context-sensitive analysis of modular programs poses important theoretical and practical problems. One solution, used in several proposals, is to resort to context-free analysis. Other proposals do address context-sensitive analysis, but are only applicable when the description domain used satisfies rather restrictive properties. In this paper, we argue that a general framework for context-sensitive analysis of modular programs, i.e., one that allows using all the domains which have proved useful in practice in the non-modular setting, is indeed feasible and very useful. Driven by our experience in the design and implementation of analysis and specialization techniques in the context of CiaoPP, the Ciao system preprocessor, in this paper we discuss a number of design goals for context-sensitive analysis of modular programs as well as the problems which arise in trying to meet these goals. We also provide a high-level description of a framework for analysis of modular programs which substantially meets these objectives. This framework is generic in that it can be instantiated in different ways in order to adapt to different contexts. Finally, the behavior of the different instantiations with respect to the design goals that motivate our work is also discussed.
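A generic, illustrative sketch (Python, not the CiaoPP/PLAI implementation) of the kind of intermodular fixpoint scheduling such a framework revolves around: each module is analysed using the currently known answers of the modules it imports, and importers are re-analysed whenever those answers change, until nothing changes:

```python
# Hypothetical module graph and per-module analysis results; everything below is
# a stand-in for an actual abstract-interpretation-based analysis.
modules = {                 # module -> modules it imports
    "main": ["util"],
    "util": [],
}

def analyze_module(name, imported_answers):
    """Stand-in for the context-sensitive analysis of one module: returns the
    abstract 'answers' it exports, possibly refined by what it imports."""
    if name == "util":
        return {"util:len/2": "num"}
    # 'main' can only be refined once util's exported answer is available.
    return {"main:run/1": "num" if imported_answers.get("util:len/2") == "num" else "top"}

def intermodular_fixpoint(modules):
    answers, worklist = {}, list(modules)
    while worklist:
        mod = worklist.pop()
        imported = {k: v for k, v in answers.items()
                    if k.split(":")[0] in modules[mod]}
        new = analyze_module(mod, imported)
        if any(answers.get(k) != v for k, v in new.items()):
            answers.update(new)
            # Re-schedule the modules that import this one, since their
            # assumptions about its exported answers may now be stale.
            worklist.extend(m for m, deps in modules.items() if mod in deps)
    return answers

print(intermodular_fixpoint(modules))
```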
Abstract:
The optimization of chemical processes where the flowsheet topology is not kept fixed is a challenging discrete-continuous optimization problem. Usually, this task has been performed through equation-based models. That approach presents several problems, such as the tedious and complicated estimation of component properties, or the handling of huge problems (with thousands of equations and variables). We propose a GDP (Generalized Disjunctive Programming) approach, coupled with a flowsheet program, as an alternative to MINLP models. The novelty of this approach lies in using a commercial modular process simulator in which the superstructure is drawn directly on the graphical user interface of the simulator. This methodology takes advantage of modular process simulators (specially tailored numerical methods, reliability, and robustness) and of the flexibility of the GDP formulation for modeling and solution. The proposed optimization tool is successfully applied to the synthesis of a methanol plant in which different alternatives are available for the streams, equipment, and process conditions.
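As a rough illustration of the simulator-in-the-loop idea (not the paper's actual GDP algorithm or its methanol case study), the toy code below enumerates the terms of a single disjunction and, for each fixed alternative, optimises a continuous operating variable by repeated calls to a stand-in simulate() function; all names and numbers are hypothetical:

```python
import numpy as np

def simulate(reactor, pressure):
    """Hypothetical stand-in for one run of a modular process simulator: returns
    the annualised cost of the flowsheet for a fixed discrete alternative and a
    fixed operating pressure (bar). A real flowsheet would be converged by the
    simulator's own specially tailored numerical methods."""
    base = {"adiabatic": 120.0, "isothermal": 150.0}[reactor]
    optimum = {"adiabatic": 55.0, "isothermal": 40.0}[reactor]
    return base + 0.8 * (pressure - optimum) ** 2 / 10

def optimize_superstructure():
    """Enumerate the disjunction (reactor type) and, for each fixed choice,
    optimise the continuous degree of freedom by repeated simulator calls."""
    best = None
    for reactor in ("adiabatic", "isothermal"):          # terms of the disjunction
        pressures = np.linspace(20, 80, 61)
        costs = [simulate(reactor, p) for p in pressures]
        i = int(np.argmin(costs))
        candidate = (costs[i], reactor, pressures[i])
        if best is None or candidate < best:
            best = candidate
    return best

cost, reactor, pressure = optimize_superstructure()
print(f"best alternative: {reactor} reactor at {pressure:.0f} bar, cost {cost:.1f}")
```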