972 results for Manufacturing Execution Systems
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which defeats time composability and, with it, incrementality. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on how the software is deployed to it. We then focus on the role played by the real-time operating system in this pursuit. Initially we consider single-core processors and, becoming progressively less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To demonstrate what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from case studies in the avionics industry.
Abstract:
Systems Biology is an innovative way of doing biology that has recently emerged in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interaction and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
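To give a concrete flavour of the MAS-based approach summarised above, the following minimal Python sketch shows cell agents situated in a niche environment. It is only an illustration under assumed rules, not the thesis's actual model or tool; the names (Cell, Niche) and the division/differentiation probabilities are hypothetical.

import random

class Cell:
    """A cell agent with simple stochastic behaviour (hypothetical rules)."""
    def __init__(self, kind="stem"):
        self.kind = kind

    def step(self, niche):
        # Divide with a small probability if the niche has free capacity.
        if niche.has_space() and random.random() < 0.10:
            niche.add(Cell(self.kind))
        # Otherwise, a stem cell may differentiate with a small probability.
        elif self.kind == "stem" and random.random() < 0.05:
            self.kind = "progenitor"

class Niche:
    """The environment in which the cell agents are situated and interact."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.cells = [Cell() for _ in range(10)]

    def has_space(self):
        return len(self.cells) < self.capacity

    def add(self, cell):
        self.cells.append(cell)

    def step(self):
        for cell in list(self.cells):
            cell.step(self)

# One "virtual experiment": run the simulation and observe the population.
niche = Niche()
for t in range(50):
    niche.step()
print(len(niche.cells), sum(c.kind == "stem" for c in niche.cells))

In a virtual laboratory of the kind described in the abstract, runs like this one would be repeated, observed and compared against laboratory data to validate, falsify or refine the model.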
Abstract:
The inter-American human rights system has been conceived following the example of the European system under the European Convention on Human Rights (ECHR) before it was modified by Protocol No 11. However, two important differences exist. First, the authority of the European Court of Human Rights (ECtHR) to order reparation has been strictly limited by the principle of subsidiarity. Thus, the ECtHR's main function is to determine whether the ECHR has been violated. Beyond the declaratory effect of its judgments, according to Article 41 ECHR, it may only "afford just satisfaction to the injured party". The powers of the Inter-American Court of Human Rights (IACtHR) were conceived in a much broader fashion in Article 63 of the American Convention on Human Rights (ACHR), giving the Court the authority to order a variety of individual and general measures aimed at obtaining restitutio in integrum. The first main part of this thesis shows how both Courts have developed their reparation practice and examines the advantages and disadvantages of each approach. Secondly, the ECtHR's rather limited reparation powers have, interestingly, been combined with an elaborate implementation system that includes several of the Council of Europe's organs, principally the Committee of Ministers. In the Inter-American System, no dedicated mechanism was implemented to oversee compliance with the IACtHR's judgments. The ACHR limits itself to inviting the Court to point out in its annual reports the cases that have not been complied with and to propose measures to be adopted by the General Assembly of the Organization of American States. The General Assembly, however, hardly ever took action. The IACtHR has therefore filled this gap by developing its own procedure to oversee compliance with its judgments. Both the European and the American solutions to ensure compliance are presented and compared in the second main part of this thesis. Finally, based on the results of both main parts, a comparative analysis of the reparation practice and the execution results in both human rights systems is provided, aimed at developing proposals for improving the functioning of each human rights protection system.
Abstract:
In this project, I examine current forms of scientific management systems, Lean and Six Sigma, as they relate to technical communication. With the goal of breaking work up into standardized processes in order to cut costs and increase efficiency, Lean, Six Sigma and Lean Six Sigma hybrid systems are increasingly applied beyond manufacturing operations to service and other types of organizational work, including technical communication. By consulting scholarship from fields such as business, management, and engineering, and analyzing government Lean Six Sigma documentation, I investigate how these systems influence technical communication knowledge and practice in the workplace. I draw out the consequences of system-generated power structures as they affect knowledge work, like technical communication practice, when it is reduced to process. In pointing out the problems these systems have in managing knowledge work, I also ask how technical communication might shape them.
Abstract:
Waste effluents from the forest products industry are sources of lignocellulosic biomass that can be converted to ethanol by yeast after pretreatment. However, the challenge of improving ethanol yields from a mixed pentose and hexose fermentation of a potentially inhibitory hydrolysate still remains. Hardboard manufacturing process wastewater (HPW) was evaluated as a potential feedstream for lignocellulosic ethanol production by native xylose-fermenting yeast. After screening of xylose-fermenting yeasts, Scheffersomyces stipitis CBS 6054 was selected as the ideal organism for conversion of the HPW hydrolysate material. The individual and synergistic effects of inhibitory compounds present in the hydrolysate were evaluated using response surface methodology. It was concluded that organic acids have an additive negative effect on fermentations. Fermentation conditions were also optimized in terms of aeration and pH. Methods for improving productivity and achieving higher ethanol yields were investigated, using adaptation to the conditions present in the hydrolysate through repeated cell sub-culturing. The objectives of the present study were to adapt S. stipitis CBS 6054 to a dilute-acid-pretreated, lignocellulose-containing waste stream; compare the physiological, metabolic, and proteomic profiles of the adapted strain to those of its parent; quantify changes in protein expression/regulation, metabolite abundance, and enzyme activity; and determine the biochemical and molecular mechanisms of adaptation. The adapted culture showed improvement in both substrate utilization and ethanol yields compared to the unadapted parent strain. The adapted strain also exhibited a distinct growth phenotype compared to its unadapted parent based on its physiological and proteomic profiles. Several potential targets that could be responsible for strain improvement were identified. These targets could have implications for metabolic engineering of strains for improved ethanol production from lignocellulosic feedstocks. Although this work focuses specifically on the conversion of HPW to ethanol, the methods developed can be used for any feedstock/product system that employs a microbial conversion step. The benefit of this research is that the organisms will be optimized for a company's specific system.
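As an illustration of the response-surface idea mentioned above (not the study's actual data or model), the sketch below fits a second-order surface to hypothetical inhibitor/yield observations with numpy; the concentrations and yields are invented solely to show how main, interaction and quadratic effects are separated.

import numpy as np

# Hypothetical data: two inhibitor concentrations (g/L) and the observed
# ethanol yield (g ethanol / g sugar) for each fermentation run.
acetic = np.array([0.0, 0.0, 5.0, 5.0, 2.5, 2.5, 2.5])
furfural = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 0.0, 1.0])
yield_ = np.array([0.42, 0.38, 0.35, 0.27, 0.36, 0.39, 0.31])

# Second-order response surface: intercept, main, interaction and quadratic terms.
X = np.column_stack([
    np.ones_like(acetic), acetic, furfural,
    acetic * furfural, acetic**2, furfural**2,
])
coef, *_ = np.linalg.lstsq(X, yield_, rcond=None)

# The interaction coefficient (b_af) is what distinguishes a purely additive
# effect of the inhibitors from a synergistic one.
print(dict(zip(["b0", "b_a", "b_f", "b_af", "b_aa", "b_ff"], coef.round(4))))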
Abstract:
This thesis is composed of three life-cycle analysis (LCA) studies of manufacturing to determine cumulative energy demand (CED) and greenhouse gas (GHG) emissions. The methods proposed could reduce the environmental impact by reducing the CED in three manufacturing processes. First, industrial symbiosis is proposed and an LCA is performed on both conventional 1 GW-scaled hydrogenated amorphous silicon (a-Si:H)-based single-junction and a-Si:H/microcrystalline-Si:H tandem-cell solar PV manufacturing plants and on such plants coupled to silane recycling plants. Using a recycling process that cuts silane loss from 85 to only 17 percent yields CED savings of 81,700 GJ and 290,000 GJ per year for single- and tandem-junction plants, respectively. This recycling process reduces the cost of raw silane by 68 percent, or approximately $22.6 and $79 million per year for a single- and tandem-junction 1 GW PV production facility, respectively. The results show the environmental benefits of silane recycling centered around a-Si:H-based PV manufacturing plants. Second, an open-source self-replicating rapid prototyper or 3-D printer, the RepRap, has the potential to reduce the environmental impact of manufacturing polymer-based products using a distributed manufacturing paradigm, an impact further minimized by the use of PV and by improvements in PV manufacturing. Using 3-D printers for manufacturing provides the ability to ultra-customize products and to change fill composition, which increases material efficiency. An LCA was performed on three polymer-based products to determine the CED and GHG emissions of conventional large-scale production, and the results are compared to experimental measurements on a RepRap producing identical products in ABS and PLA. The results of this LCA study indicate that the CED of manufacturing polymer products can possibly be reduced using distributed manufacturing with existing 3-D printers at under 89% fill, and reduced even further with a solar photovoltaic system. The results indicate that the ability of RepRaps to vary fill has the potential to diminish the environmental impact of many products. Third, one additional way to improve the environmental performance of this distributed manufacturing system is to create the polymer filament feedstock for 3-D printers from post-consumer plastic bottles. An LCA was performed on the recycling of high-density polyethylene (HDPE) using the RecycleBot. The results of the LCA showed that distributed recycling has a lower CED than the best-case scenario used for centralized recycling. If this process is applied to the HDPE currently recycled in the U.S., more than 100 million MJ of energy could be conserved per annum, along with significant reductions in GHG emissions. This presents a novel path to a future of distributed manufacturing suited for both the developed and developing world, with reduced environmental impact. From improving manufacturing in the photovoltaic industry with the use of recycling, to recycling and manufacturing plastic products within our own homes, each step reduces the impact on the environment. The three coupled projects presented here show a clear potential to reduce the environmental impact of manufacturing and other processes by implementing complementary systems, which have environmental benefits of their own, in order to achieve a compounding effect of reduced CED and GHG emissions.
Abstract:
Business strategy is important to all organizations. Nearly all Fortune 500 firms are implementing Enterprise Resource Planning (ERP) systems to improve the execution of their business strategy and to improve its integration with their information technology (IT) strategy. Successful implementation of these multi-million dollar software systems requires new emphasis on change management and on business and IT strategic alignment. This paper examines business and IT strategic alignment and explores whether an ERP implementation can drive business process reengineering and business and IT strategic alignment. An overview of business strategy and strategic alignment is followed by an analysis of ERP. The As-Is/To-Be process model is then presented and explained as a simple but vital tool for improving business strategy, strategic alignment, and ERP implementation success.
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based on both tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
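By way of illustration only (the paper's case studies are not in Python, and the ShoppingCart class here is hypothetical), a test derived from observed side effects would set up the recorded fixture, execute the unit under test, and assert the recorded effects:

import unittest

class ShoppingCart:
    """Hypothetical legacy unit under test."""
    def __init__(self):
        self.items = []
        self.total = 0.0

    def add_item(self, name, price):
        self.items.append(name)
        self.total += price

class ShoppingCartTest(unittest.TestCase):
    def test_add_item(self):
        # Fixture: the object state observed before the traced method ran.
        cart = ShoppingCart()
        # Execute the unit under test.
        cart.add_item("book", 12.5)
        # Assertions: the side effects seen in the example run (one item
        # appended, total increased) become the expected effects.
        self.assertEqual(cart.items, ["book"])
        self.assertAlmostEqual(cart.total, 12.5)

if __name__ == "__main__":
    unittest.main()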
Abstract:
Rapid Manufacturing (RM) has recently become a buzzword, particularly in the field of Selective Laser Sintering (SLS). Over the course of this technology's development, now spanning more than 15 years, significant progress has been made in recent years, bringing part properties close to the requirements for end-use parts. RM is therefore to be understood less in terms of larger batch sizes; rather, Rapid Manufacturing means that additively manufactured parts are used directly in the end product or end application. Selective Laser Melting (SLM), with which metal parts in standard materials can be produced directly from metallic powder, is predestined for RM because of its good material properties. The first fields of application of the SLM process focused on the production of tooling inserts with conformal cooling channels; these tools must be understood as RM, since they are used directly for the end application, the injection moulding process. Current trends, however, point towards the production of functional parts, e.g. for mechanical engineering. Although the production of complex functional parts still poses problems, e.g. with part structures that overhang with respect to the build direction, RM by means of SLM nevertheless shows considerable advantages. Besides the clear benefits of the possible customization of parts, considerable cost advantages can also be achieved for smaller part sizes. However, the limits of what is currently possible show in which areas the SLM process requires further development. Topics such as productivity, the problem of the still necessary support structures, and quality assurance must be addressed in the coming years if this process is to take the step towards becoming an established production process and thereby gain broader acceptance and application.
Abstract:
Nowadays, digital data sets are increasingly available in the dental field, for example thanks to rapid progress in imaging techniques. CAD/CAM systems have long been state of the art in dental technology. For the use of such systems, however, a plaster model is required, which is digitized by the dental technician at the beginning of the process chain using an optical scanner. The further development of intraoral scanners today also allows the dentist to digitize entire jaws directly in the patient's mouth. Particularly for aesthetic restorations, the dental model nevertheless remains the irreplaceable working basis for the technician. In the present work, a Rapid Manufacturing process for producing dental models based on stereolithography is presented. The particular requirements of additive manufacturing processes for dental applications with respect to precision, robustness and cost-effectiveness are addressed, and a newly developed build strategy is presented by means of which these requirements are met.
Abstract:
While revenue management (RM) is traditionally considered a tool of service operations, RM shows considerable potential for application in manufacturing operations. The typical challenges in make-to-order manufacturing are fixed manufacturing capacities and a great variety in offered products, going along with pronounced fluctuations in demand and profitability. Since Harris and Pinder in the mid-90s, numerous papers have furthered the understanding of RM theory in this environment. Nevertheless, results to be expected from applying the developed methods to a practical industry setting have yet to be reported. To this end, this paper investigates a possible application of RM at ThyssenKrupp VDM, leading to considerable improvements in several areas.
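As a sketch of the kind of capacity control RM brings to make-to-order manufacturing, the following applies Littlewood-style marginal analysis to two order classes competing for the same fixed capacity; the prices and the demand distribution are hypothetical and are not taken from the ThyssenKrupp VDM application.

# Hypothetical two-class setting: premium orders pay 900 EUR/ton, standard
# orders pay 600 EUR/ton, and both consume the same scarce capacity.
premium_price, standard_price = 900.0, 600.0

# Hypothetical discrete distribution of premium demand in tons (demand -> probability).
demand_dist = {0: 0.05, 10: 0.15, 20: 0.30, 30: 0.30, 40: 0.15, 50: 0.05}

def prob_demand_exceeds(level):
    return sum(p for d, p in demand_dist.items() if d > level)

# Raise the protection level in 10-ton steps while the expected marginal revenue
# of reserving more capacity for premium orders still exceeds the standard price.
protection = 0
while prob_demand_exceeds(protection) * premium_price >= standard_price:
    protection += 10

print(f"protect {protection} tons of capacity for premium orders")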
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication mechanisms can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem. Currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability in the timing behavior and in resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared to the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
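To illustrate what predictable serialization buys (the thesis generates Java code with a dedicated compiler; the Python sketch and the field layout below are only an analogy under assumed field names), fixing the wire format ahead of time makes the message size, and therefore buffer and bandwidth needs, known before any invocation runs:

import struct

# Hypothetical fixed wire format for a remote invocation argument:
# a record id (uint32), a timestamp in microseconds (uint64) and an altitude (float64).
FLIGHT_SAMPLE = struct.Struct("<IQd")

# Because the format is fixed, the message size (and hence buffer and
# bandwidth needs) is known before any invocation takes place.
MESSAGE_SIZE = FLIGHT_SAMPLE.size  # 20 bytes

def serialize(record_id, timestamp_us, altitude_m):
    return FLIGHT_SAMPLE.pack(record_id, timestamp_us, altitude_m)

def deserialize(payload):
    return FLIGHT_SAMPLE.unpack(payload)

msg = serialize(7, 1_650_000_000_000_000, 10_668.0)
assert len(msg) == MESSAGE_SIZE
print(deserialize(msg))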
Abstract:
Nowadays computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes the performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis. In this thesis, in order to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques in order to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best trade-off in performance according to each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting the results using real problems and experiments designed to test the limits of the algorithms.
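A minimal sketch of the evolutionary selection step such a methodology relies on is shown below; the converter catalogue, the fitness weights and the encoding (one catalogue index per load rail) are hypothetical and much simpler than the thesis's actual two-step approach.

import random

# Hypothetical converter catalogue: (name, efficiency, cost in USD, area in cm^2).
CATALOG = [
    ("buck_A", 0.92, 1.10, 0.8),
    ("buck_B", 0.95, 1.90, 1.1),
    ("ldo_A",  0.70, 0.30, 0.3),
    ("module_A", 0.96, 3.50, 2.0),
]
N_RAILS = 5  # number of loads/rails to supply

def fitness(solution):
    # Reward efficiency, penalize cost and size (weights are illustrative).
    eff = sum(CATALOG[i][1] for i in solution) / N_RAILS
    cost = sum(CATALOG[i][2] for i in solution)
    area = sum(CATALOG[i][3] for i in solution)
    return 10 * eff - 1.0 * cost - 0.5 * area

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_RAILS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_RAILS)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:       # mutation
                child[random.randrange(N_RAILS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([CATALOG[i][0] for i in best], round(fitness(best), 2))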
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
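The survey concerns Prolog systems, but the idea of or-parallelism, exploring the alternative branches of a search concurrently, can be illustrated outside logic programming; in the toy sketch below (with a hypothetical goal x*y == 36) each alternative value of x is one branch, run independently on a process pool.

from concurrent.futures import ProcessPoolExecutor

def branch(x, target=36, domain=range(1, 13)):
    # Explore one alternative: all solutions with this fixed x.
    return [(x, y) for y in domain if x * y == target]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Each branch runs independently, mirroring or-parallel execution.
        results = pool.map(branch, range(1, 13))
    solutions = [s for branch_solutions in results for s in branch_solutions]
    print(solutions)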
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model addresses the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at execution time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
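A toy sketch of the agent idea (not the article's infrastructure; the MonitoringAgent class and the thread-pool probe below are hypothetical) shows how one agent can poll a managed resource and detect changes at execution time:

import time

class MonitoringAgent:
    """Hypothetical agent that polls one managed resource and reports changes."""
    def __init__(self, resource_name, probe):
        self.resource_name = resource_name
        self.probe = probe          # callable returning the current resource state
        self.last_state = None

    def poll(self):
        state = self.probe()
        if state != self.last_state:
            # A detected change would be pushed into the shared information model.
            print(f"[{self.resource_name}] state changed: {self.last_state} -> {state}")
            self.last_state = state

# Example: watch a hypothetical thread-pool size exposed by an application server.
fake_pool_sizes = iter([10, 10, 12, 12, 8])
agent = MonitoringAgent("http-thread-pool", probe=lambda: next(fake_pool_sizes))
for _ in range(5):
    agent.poll()
    time.sleep(0.1)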