832 results for Multi-particle systems
Abstract:
For many years, drainage design was mainly about providing sufficient network capacity. This traditional approach has been successful with the aid of computer software and technical guidance. However, drainage design criteria have been evolving due to rapid population growth, urbanisation, climate change and increasing sustainability awareness. Sustainable drainage systems that bring benefits in addition to water management have been recommended as better alternatives to conventional pipes and storage. Although the concepts and good-practice guidance have been communicated to decision makers and the public for years, network capacity still remains the key design focus in many circumstances, while the additional benefits are generally considered secondary. Yet the picture is changing. The industry is beginning to realise that delivering multiple benefits should be given top priority, while the drainage service itself can be considered a secondary benefit. This shift in focus means the industry has to adapt to new design challenges, and new guidance and computer software are needed to assist decision makers. For this purpose, we developed a new decision support system consisting of two main components: a multi-criteria evaluation framework for drainage systems and a multi-objective optimisation tool. Users can systematically quantify the performance, life-cycle costs and benefits of different drainage systems using the evaluation framework. The optimisation tool assists users in determining combinations of design parameters, such as the sizes, order and type of drainage components, that maximise multiple benefits. In this paper, we focus on the optimisation component of the decision support framework. The formulation of the optimisation problem, its parameters and the general configuration are discussed. We also examine the sensitivity of individual variables and the benchmark results obtained using common multi-objective optimisation algorithms. The work described here is the output of an EngD project funded by EPSRC and XP Solutions.
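To make the optimisation setting concrete, here is a minimal Python sketch of the kind of multi-objective evaluation such a tool performs: candidate designs are scored against several objectives and filtered to the non-dominated (Pareto) set. The design parameters and objective formulas (storage_m3, area_m2, the cost, flood-volume and amenity proxies) are invented for illustration and are not the paper's actual formulation.

```python
def evaluate(design):
    """Toy objectives: minimise cost, minimise flood volume, maximise amenity."""
    storage, area = design["storage_m3"], design["area_m2"]
    cost = 120.0 * storage + 45.0 * area            # capital cost proxy
    flood_volume = 5000.0 / (1.0 + 0.01 * storage)  # capacity proxy
    amenity = 0.8 * area                            # co-benefit proxy
    return (cost, flood_volume, -amenity)           # all objectives minimised

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

designs = [{"storage_m3": s, "area_m2": a} for s in (100, 300, 500) for a in (50, 200)]
scored = [(d, evaluate(d)) for d in designs]
pareto = [d for d, f in scored
          if not any(dominates(g, f) for _, g in scored if g != f)]
print(len(pareto), "non-dominated designs")
```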
Abstract:
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
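As an illustration of one axis of such a diagnostic, the sketch below computes "reliability" as the fraction of independent seeded runs that reach a target quality threshold. The random-search stand-in for an MOEA and the threshold value are assumptions made only to keep the example self-contained; the study's actual metrics are considerably more sophisticated.

```python
import random

def run_moea(seed, evaluations=1000):
    """Stand-in optimiser: best value of a simple 1-D objective found by random search."""
    rng = random.Random(seed)
    return min(abs(rng.uniform(-2, 2)) for _ in range(evaluations))

def reliability(n_runs=50, target=0.01):
    """Fraction of independent runs whose final quality meets the target."""
    hits = sum(run_moea(seed) <= target for seed in range(n_runs))
    return hits / n_runs

print(f"reliability: {reliability():.2f}")
```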
Abstract:
Most water distribution systems (WDSs) need rehabilitation due to aging infrastructure, which leads to decreasing capacity, increasing leakage and, consequently, low performance. An appropriate strategy specifying the location and timing of pipeline rehabilitation under a limited budget is the main challenge, and it has been addressed frequently by researchers and practitioners. The selection of appropriate rehabilitation techniques and material types, on the other hand, is another major issue that has yet to be addressed properly. The latter affects the environmental impact of a rehabilitation strategy, bearing on global warming mitigation and the consequent climate change. This paper presents a multi-objective optimisation model for rehabilitation strategies in WDSs addressing the abovementioned criteria, mainly focused on greenhouse gas (GHG) emissions, either directly from fossil fuel and electricity or indirectly from the embodied energy of materials. The objective functions are to minimise: (1) the total cost of rehabilitation, including capital and operational costs; (2) the amount of leakage; and (3) GHG emissions. The Pareto optimal front containing the optimal solutions is determined using the Non-dominated Sorting Genetic Algorithm (NSGA-II). Decision variables in this optimisation problem are classified into two groups: (1) the percentage proportion of each rehabilitation technique each year; and (2) the material type of new pipeline for rehabilitation each year. The rehabilitation techniques used here include replacement, rehabilitation and lining, cleaning, and pipe duplication. The developed model is demonstrated through its application to the Mahalat WDS, located in the central part of Iran. The rehabilitation strategy is analysed over a 40-year planning horizon, and a number of conventional techniques for selecting pipes for rehabilitation are also analysed. The results show that the optimal rehabilitation strategy considering GHG emissions successfully reduces total expenses and efficiently decreases leakage from the WDS whilst meeting environmental criteria.
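A hedged sketch of how the grouped decision variables could be encoded: per-year technique proportions (repaired so they sum to one) plus a per-year material choice. The technique and material lists mirror the abstract, but the encoding itself is an assumption, not the paper's implementation.

```python
import random

TECHNIQUES = ["replacement", "rehab_lining", "cleaning", "duplication"]
MATERIALS = ["ductile_iron", "PVC", "HDPE"]  # assumed material options
HORIZON = 40  # years, as in the planning horizon above

def random_solution(rng):
    """One candidate: for each year, technique proportions plus a material type."""
    solution = []
    for _ in range(HORIZON):
        raw = [rng.random() for _ in TECHNIQUES]
        total = sum(raw)
        proportions = [x / total for x in raw]  # repair step: enforce sum == 1
        solution.append((proportions, rng.choice(MATERIALS)))
    return solution

rng = random.Random(42)
year0_props, year0_material = random_solution(rng)[0]
print(dict(zip(TECHNIQUES, (round(p, 2) for p in year0_props))), year0_material)
```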
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe-smoothing-based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of the progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some real-world-based, large-scale systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
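The smoothing principle lends itself to a compact repair operator. The sketch below, a minimal illustration rather than MOPSGA's actual operator, clamps any pipe whose diameter exceeds the sum of its directly upstream diameters down to the largest feasible commercial size; the toy topology and diameter set are assumptions.

```python
DIAMETERS = [100, 150, 200, 250, 300, 400, 500]  # assumed commercial sizes, mm

def smooth(network, diameters):
    """Clamp each pipe's diameter to the sum of its upstream diameters,
    snapping down to the largest allowed commercial size."""
    for pipe, upstream in network["topology"].items():
        if not upstream:                  # source pipes are left unchanged
            continue
        limit = sum(network["diameter"][u] for u in upstream)
        if network["diameter"][pipe] > limit:
            feasible = [d for d in diameters if d <= limit]
            network["diameter"][pipe] = max(feasible) if feasible else min(diameters)
    return network

net = {"topology": {"p1": [], "p2": ["p1"], "p3": ["p2"]},
       "diameter": {"p1": 300, "p2": 400, "p3": 200}}
print(smooth(net, DIAMETERS)["diameter"])  # p2 clamped from 400 down to 300
```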
Abstract:
In this paper the architecture of an experimental multi-paradigm programming environment is sketched, showing how its parts combine with application modules to integrate program modules written in different programming languages and paradigms. Adaptive automata are special self-modifying formal state machines used as a design and implementation tool in the representation of complex systems. Adaptive automata have been proven to have the same formal power as Turing machines; therefore, at least in theory, arbitrarily complex systems may be modelled with them. The present work briefly introduces this formal tool and presents case studies showing its use in two very different situations: first, in the name management module of a multi-paradigm, multi-language programming environment; and second, in an application program implementing an adaptive automaton that accepts a context-sensitive language.
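To give a flavour of the second case study, the toy recogniser below accepts the context-sensitive language a^n b^n c^n by editing its own transition table while reading input: each 'a' grows the chains of states that the following 'b's and 'c's must traverse. The encoding is a simplified illustration; the thesis' adaptive automaton formalism is richer.

```python
def accepts(word):
    """Adaptive recogniser for the language a^n b^n c^n (n >= 1)."""
    delta = {}          # transition table: (state, symbol) -> next state
    n = 0               # how many 'a's have been read (used to name new states)
    state = "read_a"
    for ch in word:
        if state == "read_a" and ch == "a":
            # adaptive action: grow the 'b' chain and the 'c' chain by one state
            n += 1
            if n == 1:
                delta[("read_a", "b")] = "b1"
            else:
                delta[(f"b{n-1}", "b")] = f"b{n}"
                delta[(f"c{n-1}", "c")] = f"c{n}"
                del delta[(f"b{n-1}", "c")]    # the hand-off to 'c' moves to the new end
            delta[(f"b{n}", "c")] = "c1"
            continue
        state = delta.get((state, ch), "reject")
        if state == "reject":
            return False
    return n >= 1 and state == f"c{n}"

assert accepts("abc") and accepts("aaabbbccc")
assert not accepts("aabbc") and not accepts("abcc") and not accepts("")
```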
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim to reduce the test costs of a core-based system by means of hardware reuse and the integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in the systems that use this communication platform. A power-aware test scheduling algorithm that exploits the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated over a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
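As a rough illustration of power-aware test scheduling, the sketch below greedily packs core tests into concurrent sessions so that the summed test power never exceeds a cap, approximating each session's duration by its longest test. The core data, the cap and the greedy policy are assumptions for illustration, not the thesis' algorithm.

```python
def schedule(cores, power_cap):
    """cores: list of (name, test_time, test_power). Returns a list of sessions.
    Greedy: longest tests first, placed in the first session with power headroom."""
    sessions = []  # each session: {"cores": [...], "power": used, "time": max test time}
    for name, t, p in sorted(cores, key=lambda c: -c[1]):
        for s in sessions:
            if s["power"] + p <= power_cap:
                s["cores"].append(name)
                s["power"] += p
                s["time"] = max(s["time"], t)
                break
        else:
            sessions.append({"cores": [name], "power": p, "time": t})
    return sessions

cores = [("cpu", 120, 40), ("dsp", 90, 35), ("sram", 60, 20), ("uart", 30, 10)]
plan = schedule(cores, power_cap=60)
print([s["cores"] for s in plan], "total time:", sum(s["time"] for s in plan))
```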
Abstract:
The focus of this thesis is the development and modeling of an interface architecture for interfacing analog signals in mixed-signal SOCs. We claim that the approach presented here achieves a wide frequency range and covers a large range of applications with constant performance, allied to digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to promote application configurability in the digital domain, which leads to a mixed-signal interface. The use of a fixed analog block avoids the performance loss common to configurable analog blocks, while configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation and synthesis to implement the target application, with very good performance prediction. The proposed approach uses frequency translation (mixing) of the input signal followed by its conversion to the ΣΔ domain, which allows a fairly constant analog block and a uniform treatment of input signals from DC to high frequencies. Programmability is performed in the ΣΔ digital domain, where the performance specified by the application can be closely achieved. Theoretical and simulation models of the interface performance are developed for design space exploration and physical design support. Two prototypes were built and characterized to validate the proposed model and to implement some application examples. The usage of this interface as a multi-band parametric ADC and as a two-channel analog multiplier and adder is shown, and the multi-channel analog interface architecture is also presented. The characterization measurements support the main advantages of the proposed approach.
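A small numerical sketch of the signal path described above: the input is mixed with a local oscillator and then converted by a first-order ΣΔ modulator. All frequencies, the modulator order and the absence of the decimation filter are illustrative assumptions rather than the prototype's specification.

```python
import math

def mix(samples, f_lo, fs):
    """Frequency translation: multiply the input by a local-oscillator cosine."""
    return [x * math.cos(2 * math.pi * f_lo * n / fs) for n, x in enumerate(samples)]

def sigma_delta(samples):
    """First-order sigma-delta: v[n] = v[n-1] + x[n] - y[n-1], y = sign(v)."""
    v, y, bits = 0.0, 0.0, []
    for x in samples:
        v += x - y
        y = 1.0 if v >= 0 else -1.0
        bits.append(y)
    return bits

fs, f_in, f_lo, n = 1_000_000, 210_000, 200_000, 4096
signal = [0.5 * math.sin(2 * math.pi * f_in * k / fs) for k in range(n)]
bits = sigma_delta(mix(signal, f_lo, fs))
# The 10 kHz difference component now sits inside the SD baseband; a digital
# filter (not shown) would recover it from the one-bit stream.
print(sum(bits) / len(bits))  # near-zero mean for a zero-mean input
```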
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues only marginally considered in previous approaches, such as abstracting the distribution of design automation resources over the network, allowing both synchronous and asynchronous interaction among designers, and supporting extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprehends a layered software environment that supports CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework that includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework, named Cave2, follows the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the foundations provided by the object-oriented framework allowed a series of improvements not available in previous approaches:
- Object-oriented frameworks are extensible by design, and so are the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, giving collaborating parties the flexibility to choose individual visualization settings.
- The control of consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. The mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows late synchronization between view and semantics in case of unavailability of a network connection between them.
- The use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. Connecting to remote tools and services through a look-up protocol also completely abstracted the network location of such resources, allowing resources to be added and removed at runtime.
- The implemented CAD Framework is entirely based on Java technology, relying on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and introduced a new paradigm for remote interaction between designers. The resulting CAD Framework supports fine-grained, event-based collaboration, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform based on reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions (design representation primitives and tool blocks) are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the integration of multimedia metadata into the design data model, explored in the frame of an online educational and training platform.
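A condensed sketch of the event-based consistency mechanism described above: the view forwards each interaction as an event to the semantic model, which validates it, commits the state change and notifies every registered view, while an event pool buffers interactions when the connection is unavailable. Class and method names are illustrative assumptions, not Cave2's API.

```python
class SemanticModel:
    def __init__(self):
        self.state, self.views = {}, []

    def attach(self, view):
        self.views.append(view)

    def handle(self, event):
        key, value = event
        if value is None:             # toy legality rule: reject empty updates
            return False
        self.state[key] = value       # commit, then propagate to all views
        for v in self.views:
            v.notify(event)
        return True

class View:
    def __init__(self, model, online=True):
        self.model, self.online, self.pool = model, online, []
        model.attach(self)

    def user_input(self, event):
        # inversion of control: the view never mutates the state directly
        if self.online:
            self.model.handle(event)
        else:
            self.pool.append(event)   # event pool: late synchronisation

    def reconnect(self):
        self.online = True
        while self.pool:
            self.model.handle(self.pool.pop(0))

    def notify(self, event):
        print(f"view updated: {event}")

model = SemanticModel()
a, b = View(model), View(model, online=False)
b.user_input(("net1", "added"))    # buffered in b's event pool
a.user_input(("cell7", "moved"))   # validated, committed and propagated at once
b.reconnect()                      # the pooled event reaches the semantics now
```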
Abstract:
Information and communication technologies (ICT) are present in the most diverse areas and everyday activities but, despite the actions of governments and private institutions, the computerisation of healthcare remains an open challenge in Brazil. The current situation raises questions about the difficulties associated with computerising healthcare practices, as well as the effects such difficulties have had on Brazilian society. In order to discuss these questions, this thesis presents four articles on the health informatisation process in Brazil. The first article reviews the literature on ICT in healthcare and, based on two theoretical perspectives (European studies on health information systems (HIS) in developing countries, and studies on health information and informatics within the Brazilian Sanitary Reform Movement), formulates an integrated model that combines analytical dimensions and contextual factors for understanding HIS in Brazil. The second article presents the theoretical and methodological concepts of Actor-Network Theory (ANT), an approach to the study of controversies associated with scientific discoveries and technological innovations through the networks of actors involved in such actions. This approach has underpinned information systems studies since the 1990s and inspired the analyses of the two empirical articles of this thesis. The last two articles are based on the analysis of the implementation of an HIS in a Brazilian public hospital between 2010 and 2012. For the case analysis, the actors involved in the controversies that arose during the HIS implementation were followed. The third article focuses on the activities of the systems analysts and users involved in the HIS implementation. The changes observed during the system implementation reveal that the success of the HIS was not achieved through the strict, technical execution of the initially planned activities. On the contrary, success was built collectively, through negotiation among the actors and through devices of interessement introduced during the project. The fourth article, based on the concept of information infrastructures, discusses how the CATMAT system was incorporated into E-Hosp. The analysis reveals how CATMAT's installed base was a relevant condition for its selection during the E-Hosp implementation. In addition, the heterogeneous negotiations and operations that took place during the incorporation of CATMAT into the E-Hosp system are described. Thus, this thesis argues that implementing an HIS is a collective construction endeavour involving systems analysts, healthcare professionals, politicians and technical artefacts. Moreover, it shows how HIS inscribe definitions and agreements, influencing the preferences of actors in the healthcare field.
Abstract:
In many creative and technical areas, professionals make use of paper sketches for developing and expressing concepts and models. Paper offers an almost constraint-free environment where they have as much freedom to express themselves as they need. However, paper does have some disadvantages, such as its size and the inability to manipulate content (other than removing or scratching it), which can be overcome by systems that offer the same freedom people have with paper but none of its disadvantages and limitations. Only in recent years has the technology that allows precisely that become widely available, with the development of touch-sensitive screens that can also interact with a stylus. In this project a prototype was created with the objective of finding a set of the most useful and usable interactions composed of combinations of multi-touch and pen. The project selected Computer Aided Software Engineering (CASE) tools as its application domain because it addresses a solid and well-defined discipline that still has sufficient room for new developments. This choice resulted from research conducted to find an application domain, which involved analyzing sketching tools from several possible areas and domains. User studies were conducted using Model Driven Inquiry (MDI) to better understand human sketch creation activities and the concepts devised. The prototype was then implemented, and through it user evaluations of the interaction concepts were carried out. The results validated most of the interactions, although only limited testing was possible at the time. Users had more problems using the pen; however, handwriting and ink recognition were very effective, and users quickly learned the manipulations and gestures of the Natural User Interface (NUI).
Abstract:
Nowadays, the development of intelligent agents aims to be more refined, using improved architectures and reasoning mechanisms. Revising the beliefs of an agent is also an important subject, given the consistency agents should maintain over their knowledge. In this work we propose deliberative and argumentative agents using Lego Mindstorms robots: Argumentative NXT BDI-like Agents. These agents are built using the notions of the BDI model and are capable of reasoning using the DeLP formalism. They update their knowledge base with their perceptions and revise it when necessary. Two variations are presented: the Single Argumentative NXT BDI-like Agent and the MAS Argumentative NXT BDI-like Agent.
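A very small sketch of the perceive-revise cycle mentioned above, assuming a prioritised-perception policy: each percept is added to the belief base and any belief it directly contradicts is retracted. The "~" literal encoding is an assumption, and DeLP's defeasible argumentation is far richer than this.

```python
def negate(literal):
    """Complement of a literal encoded with a '~' prefix (assumed encoding)."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs, percept):
    """Priority to perception: retract the contradicted belief, add the percept."""
    beliefs.discard(negate(percept))
    beliefs.add(percept)
    return beliefs

beliefs = {"path_clear", "box_ahead"}
for percept in ["~path_clear", "light_on"]:
    beliefs = revise(beliefs, percept)
print(sorted(beliefs))  # ['box_ahead', 'light_on', '~path_clear']
```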
Abstract:
This work addresses issues related to the analysis and development of multivariable predictive controllers based on bilinear multi-models. Monovariable and multivariable linear Generalized Predictive Control (GPC) is reviewed, highlighting its properties, key features and industrial applications. Bilinear GPC, the basis for the development of this thesis, is presented using the time-step quasilinearization approach. Some results obtained with this controller are presented to show its better performance compared to linear GPC, since bilinear models better represent the dynamics of certain processes. Because time-step quasilinearization is an approximation, it causes a prediction error that limits the controller's performance as the prediction horizon increases. To minimize this error, bilinear GPC with iterative compensation is presented, seeking better performance than classic bilinear GPC, and results of the iterative compensation algorithm are shown. The use of multi-models is also discussed, in order to correct the deficiency of controllers based on a single model when applied over large operating ranges. Methods of measuring the distance between models, also called metrics, are the main contribution of this thesis. Several applications to simulated distillation columns, which closely reproduce the behaviour of real columns, were carried out, and the results are satisfactory.
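For readers unfamiliar with the approximation, a generic SISO bilinear model and its time-step quasilinearization are sketched below in assumed notation (y output, u input, a, b, c constant coefficients); the thesis' multivariable models differ.

```latex
% Generic SISO bilinear model, rewritten with an output-dependent gain:
\begin{align}
  y(k+1) &= a\,y(k) + b\,u(k) + c\,y(k)\,u(k)
          = a\,y(k) + \bigl(b + c\,y(k)\bigr)\,u(k).
\intertext{Time-step quasilinearization freezes the gain at the last measured
output, so the $j$-step-ahead prediction uses}
  \hat{y}(k+j \mid k) &= a\,\hat{y}(k+j-1 \mid k) + \bigl(b + c\,y(k)\bigr)\,u(k+j-1).
\end{align}
% The frozen gain is exact for j = 1 but accumulates error as j grows; the
% iterative compensation re-evaluates it using the previous iteration's
% predicted outputs.
```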
Abstract:
The aim of this study was to develop multiparticulate therapeutic systems of alginate (AL) and chitosan (CS) containing triamcinolone (TC) for colonic drug delivery. Multiparticulate AL-CS systems, prepared by a complex coacervation/ionotropic gelation method, were characterized with respect to morphology and size, swelling degree, encapsulation content and efficiency, in vitro release profile in different environments simulating the gastrointestinal tract (GIT), and in vivo gastrointestinal transit. The systems showed suitable morphological characteristics, with particle diameters of approximately 1.6 mm. In a simulated gastric environment at pH 1.2, the capsules presented a low degree of swelling and low in vitro drug release. A higher swelling degree was observed in a simulated enteric environment at pH 7.5, followed by erosion, and practically all the drug was released after 6 h of in vitro assay. The in vivo analysis of gastrointestinal transit, carried out in rats, showed that the systems passed practically intact through the stomach and did not show the same swelling profile observed in the in vitro tests. It was possible to verify the presence of capsules in the colonic region of the GIT. The results indicate that AL-CS multiparticulate systems can be used as an adjuvant in the preparation of therapeutic systems for colonic drug delivery.
Abstract:
This paper analyzes dual-purpose systems with a focus on total cost optimization; a superstructure is proposed to represent the cogeneration system and desalination technology alternatives for the synthesis process. The superstructure consists of mutually exclusive components: gas turbines or conventional steam generators, each with mutually exclusive fuel supply alternatives for the combustion system. A backpressure or a condensing/extraction steam turbine can also be selected for supplying process steam. Finally, one desalination unit should be included, chosen among electrically driven reverse osmosis, steam-driven reverse osmosis, multi-effect distillation and multistage flash. The analysis performed here is based on energy and mass conservation equations, as well as the technological limiting equations of the equipment. The results for ten different commercial gas turbines revealed that electrically driven reverse osmosis was always chosen, together with both natural gas and gasified biomass gas turbines.
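The superstructure's mutually exclusive choices can be pictured as one pick per slot; the brute-force sketch below enumerates the combinations and keeps the cheapest. Component names and cost figures are invented for illustration, and the actual paper optimizes a far richer thermoeconomic model with conservation and limiting equations.

```python
from itertools import product

# Each slot holds mutually exclusive alternatives: (name, illustrative cost).
SLOTS = {
    "prime_mover": [("gas_turbine", 9.0), ("steam_generator", 7.5)],
    "steam_turbine": [("backpressure", 2.0), ("cond_extraction", 3.2)],
    "desalination": [("RO_electric", 4.1), ("RO_steam", 4.5),
                     ("multi_effect", 5.0), ("multistage_flash", 6.3)],
}

def total_cost(choice):
    """Sum the costs of the selected alternative in every slot."""
    return sum(cost for _, cost in choice)

best = min(product(*SLOTS.values()), key=total_cost)
print({slot: option[0] for slot, option in zip(SLOTS, best)},
      "cost:", total_cost(best))
```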