900 results for type systems, join calculus, ownership types, process calculus


Relevance:

100.00%

Publisher:

Abstract:

The constant increase in digital systems complexity demands the automation of the corresponding synthesis process. This paper presents a computational environment designed to produce both software and hardware implementations of a system. The code-generation tool has been named ACG8051. For hardware synthesis, a larger environment was produced, consisting of four programs: PIPE2TAB, AGPS, TABELA, and TAB2VHDL. ACG8051 and PIPE2TAB use place/transition net descriptions from PIPE as inputs. ACG8051 is aimed at generating assembly code for the 8051 micro-controller. PIPE2TAB produces a tabular version of a Mealy-type finite state machine of the system; its output is fed into AGPS, which is used for state allocation. The resulting digital system is then input to TABELA, which minimizes the control functions and outputs of the digital system. Finally, the output generated by TABELA is fed to TAB2VHDL, which produces a VHDL description of the system at the register-transfer level. Thus, we present a set of tools designed to take a high-level description of a digital system, represented by a place/transition net, and produce as output both assembly code that can be run immediately on an 8051 micro-controller and a VHDL description that can be used to implement the hardware directly, either on an FPGA or as an ASIC.
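To make the intermediate representation concrete: a Mealy-type finite state machine, as tabulated by a step like PIPE2TAB, maps each (state, input) pair to a next state and an output. The minimal Python sketch below is hypothetical; the actual table format used by these tools is not specified in the abstract.

```python
# Hypothetical sketch of a tabular Mealy machine of the kind an intermediate
# step like PIPE2TAB could emit: (current state, input) -> (next state, output).
# States, inputs and outputs here are illustrative names only.

from typing import Dict, Tuple

State = str
Input = str
Output = str

MealyTable = Dict[Tuple[State, Input], Tuple[State, Output]]

example_table: MealyTable = {
    ("s0", "start"): ("s1", "load_acc"),
    ("s1", "done"):  ("s0", "store_acc"),
}

def step(table: MealyTable, state: State, symbol: Input) -> Tuple[State, Output]:
    """Advance the machine one step, returning the next state and the output."""
    return table[(state, symbol)]

state, out = step(example_table, "s0", "start")
print(state, out)  # s1 load_acc
```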

Relevance:

100.00%

Publisher:

Abstract:

This work surveys the types of foundations most commonly used in civil construction in the city of Belém, PA, over the last few decades, defining their properties, peculiarities, the technical aspects of their design, their load-bearing capacity, and their construction process. It identifies the type of soil on which they rest, together with the corresponding geotechnical profile and characteristics, and maps the city by region according to geological profile, foundation type, founding depth, construction process, and relative cost. The work is divided into four stages: the first corresponds to a bibliographic survey of the study of foundations, covering the historical development of foundations and soil mechanics worldwide, in Brazil, and in Belém, the geotechnical research and characteristics of Belém's soil, and the study of Geographic Information Systems (GIS); the second involves a technical survey of local construction companies and design firms to catalogue and build a database of the foundations used in the city; the third concerns the ordering and technical analysis of the collected data; and the fourth consists of writing the final text and preparing GIS-based maps of the city at a scale of 1:10,000, covering the foundation type, the geotechnical profile of the terrain, the soil type and construction process, the probable foundation depth, and the relative cost of the foundation in relation to the total cost of the work.

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory for calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them and exploit their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
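To illustrate the contrast drawn above between name-passing and process-passing, the following minimal Python sketch sends a process itself (represented as a thunk) over a channel, and the receiver runs it on arrival. This is a loose analogy under the author's own assumptions, not the calculus studied in the dissertation.

```python
# Illustration of higher-order (process-passing) communication: the message
# sent over the channel is itself a process, which the receiver activates.
# Names and structure are illustrative only.

import queue
import threading

channel = queue.Queue()  # the communication medium

def sender() -> None:
    # The payload is a process (a thunk), not a value or a channel name.
    channel.put(lambda: print("hello from a transmitted process"))

def receiver() -> None:
    process = channel.get()  # receive a process...
    process()                # ...and run it (cf. term instantiation)

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```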

Relevance:

100.00%

Publisher:

Abstract:

Most languages fall into one of two camps: either they adopt a unique, static type system, or they abandon static type-checks for run-time checks. Pluggable types blur this division by (i) making static type systems optional, and (ii) supporting a choice of type systems for reasoning about different kinds of static properties. Dynamic languages can then benefit from static-checking without sacrificing dynamic features or committing to a unique, static type system. But the overhead of adopting pluggable types can be very high, especially if all existing code must be decorated with type annotations before any type-checking can be performed. We propose a practical and pragmatic approach to introduce pluggable type systems to dynamic languages. First of all, only annotated code is type-checked. Second, limited type inference is performed on unannotated code to reduce the number of reported errors. Finally, external annotations can be used to type third-party code. We present Typeplug, a Smalltalk implementation of our framework, and report on experience applying the framework to three different pluggable type systems.
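The workflow described above (check only annotated code, perform limited inference on unannotated code, and supply external annotations for third-party code) resembles optional or gradual typing as found in other dynamic languages. The sketch below illustrates the idea in Python with its standard typing tools standing in for a pluggable checker; it is not Typeplug's Smalltalk API.

```python
# Illustrative sketch of optional (pluggable-style) typing in a dynamic
# language: annotated definitions are statically checkable, unannotated
# ones remain fully dynamic. Python's typing module is a stand-in here,
# not the Typeplug framework described in the abstract.

from typing import Optional

def annotated_add(x: int, y: int) -> int:
    # A static checker (e.g. mypy) can verify calls to this function.
    return x + y

def unannotated_add(x, y):
    # No annotations: ignored by the checker, validated only at run time.
    return x + y

# External annotations for third-party code can be kept separately
# (in Python this role is played by .pyi stub files), so library sources
# do not have to be decorated before any checking is possible.
result: Optional[int] = None
result = annotated_add(2, 3)
print(result, unannotated_add("a", "b"))
```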

Relevance:

100.00%

Publisher:

Abstract:

As a consequence of the desalination process, a hypersaline reject stream, or brine, is discharged into the sea. The salinity of this discharge varies depending on the type of intake and the treatment process. Many habitats and biocenoses of marine ecosystems are adapted to an almost constant salinity environment and are very susceptible to the salinity increases caused by these discharges. Besides the brine discharge, another problem posed by desalination plants is their high energy consumption, with all the disadvantages this involves: a high cost of desalinated water for consumers, environmental pollution, and so on. The development of disposal methods, brine management tools, and studies of the behaviour of the saline plume has sought to mitigate these effects on marine ecosystems. Advances in reverse osmosis membranes, pump design, and energy recovery systems have also reduced the energy consumption of desalination plants. However, these fields seem to have reached a technological ceiling that has proved difficult to exceed in recent years. Osmotic power is proposed here as a way to further reduce energy consumption in seawater desalination by recovering energy from the brine.

This thesis pursues three main objectives: reducing the energy consumption of desalination, mitigating the impact of the brine discharge on the environment, and providing a new tool for brine management. It proposes a new process that uses forward osmosis through semipermeable membranes and seeks a synergy between desalination and wastewater reuse, combining both into a single treatment process within the integral water cycle. To verify the production, quality, and performance of the process, a pilot plant was designed and built at the Alicante II desalination plant, scaled so that the tests could be carried out with the smallest available commercial equipment. The aim is for the results to be extrapolable to larger sizes without the scaling affecting the accuracy and reliability of the conclusions.

In the pilot plant, the reject of a reverse osmosis desalination plant, together with the effluent of a conventional tertiary wastewater treatment, passes through a forward osmosis module and then through a reverse osmosis stage, the latter included to open up the possibility of increasing potable water production. Both osmosis systems are provided with a physico-chemical pretreatment (to bring the feed water to the conditions required by the membranes in each case) and a chemical cleaning system. In all tests, the reject of a reverse osmosis rack of a conventional seawater desalination plant is used as the source of concentrated solution (salt water). The source of fresh water distinguishes two types of tests: tests with the effluent from the tertiary treatment of a conventional wastewater treatment plant, which study the behaviour of the membrane under fouling, and tests with permeate, which allow the ideal behaviour of the membrane to be studied.

The results of the tests with brackish water reveal membrane fouling problems: the flow rate through the membrane decreases over time, and this effect is amplified as the water temperature rises. This led to a modification of the forward osmosis pretreatment, adding an ultrafiltration system that allows the membrane to behave stably over time. The tests with permeate made it possible to study the "ideal" behaviour of the membrane and to obtain the optimum operating conditions to aim for, achieving energy recovery rates of 1.6, which means moving from a consumption of 2.44 kWh/m3 for a conventional reverse osmosis train to 2.28 kWh/m3 when a forward osmosis system is added. The goal of future research is to reach recovery rates of 1.9, which would mean consumptions below 2 kWh/m3. This thesis concludes that the proposed process takes a further step in reducing the energy consumption of desalination, while also mitigating the effects of the brine discharge on the marine environment, since both the flow rate and the salinity of the discharge are reduced; it is applicable to existing plants and offers significant economic advantages for new plants conceived with this design.
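The reported figures imply a modest but real specific-energy saving; a quick back-of-the-envelope check, using only the numbers quoted above, is shown below.

```python
# Back-of-the-envelope check using only the figures quoted in the abstract.
conventional = 2.44   # kWh/m3, conventional reverse osmosis train
with_fo      = 2.28   # kWh/m3, with the forward osmosis stage added

saving = conventional - with_fo
relative = 100 * saving / conventional
print(f"saving: {saving:.2f} kWh/m3 ({relative:.1f} % of the conventional consumption)")
# saving: 0.16 kWh/m3 (6.6 % of the conventional consumption)
```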

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a multilayered architecture that enhances the capabilities of current QA systems and allows different types of complex questions or queries to be processed. The answers to these questions need to be gathered from factual information scattered throughout different documents. Specifically, we designed a specialized layer to process the different types of temporal questions. Complex temporal questions are first decomposed into simple questions, according to the temporal relations expressed in the original question. In the same way, the answers to the resulting simple questions are recomposed, fulfilling the temporal restrictions of the original complex question. A novel aspect of this approach is that the decomposition uses a minimal quantity of resources, with the final aim of obtaining a portable platform that is easily extensible to other languages. In this paper we also present a methodology for evaluating the decomposition of the questions, as well as the ability of the implemented temporal layer to operate at a multilingual level. The temporal layer was first implemented for English, then evaluated and compared with (a) a general-purpose QA system (F-measure of 65.47% for QA plus the English temporal layer vs. 38.01% for the general QA system) and (b) a well-known QA system. Much better results were obtained for temporal questions with the multilayered system. The system was therefore extended to Spanish, and very good results were again obtained in the evaluation (F-measure of 40.36% for QA plus the Spanish temporal layer vs. 22.94% for the general QA system).
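A hedged sketch of the decompose-then-recompose idea described above: a complex temporal question is split into simple sub-questions plus a temporal relation constraining their answers. The data structures, the toy splitting rule, and the example question are hypothetical and do not reproduce the paper's actual grammar or resources.

```python
# Illustrative (hypothetical) decomposition of a complex temporal question
# into simple sub-questions linked by a temporal relation.

from dataclasses import dataclass
from typing import List

@dataclass
class SimpleQuestion:
    text: str

@dataclass
class ComplexTemporalQuestion:
    sub_questions: List[SimpleQuestion]
    relation: str   # e.g. "BEFORE", "AFTER", "DURING"

def decompose(question: str) -> ComplexTemporalQuestion:
    # Toy rule: split on the temporal signal "after".
    if " after " in question:
        main, anchor = question.split(" after ", 1)
        return ComplexTemporalQuestion(
            sub_questions=[SimpleQuestion(main + "?"),
                           SimpleQuestion("When did " + anchor.rstrip("?") + "?")],
            relation="AFTER",
        )
    return ComplexTemporalQuestion([SimpleQuestion(question)], relation="NONE")

q = decompose("Which country did Spain play after it beat Italy?")
print(q.relation, [s.text for s in q.sub_questions])
```

The answers to the sub-questions would then be recomposed by keeping only those that satisfy the stored temporal relation, mirroring the recomposition step in the abstract.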

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes work on the application of H∞ controller synthesis to the design of controllers for single-axis, high-speed independent drive design examples. H∞ controller synthesis was used both in a single-controller format and in a self-tuning regulator, a type of adaptive controller. Three types of industrial design examples were attempted using H∞ controller synthesis, both in simulation and on a Drives Test Facility at Aston University. The results were benchmarked against a Proportional, Integral and Derivative (PID) controller with velocity feedforward (VFF), the industrial standard for this application. An analysis of the differences between an H∞ controller and a PID with VFF controller was completed. A direct-form H∞ controller was derived for a limited class of weighting functions and plants, showing the relationship between the weighting function, the nominal plant and the controller parameters. The direct-form controller was utilised in two ways. Firstly, it allowed the production of simple guidelines for the industrial design of H∞ controllers. Secondly, it was used as the controller modifier in a self-tuning regulator (STR). The STR had a controller modification time (including nominal model parameter estimation) of 8 ms. A Set-Point Gain Scheduling (SPGS) controller was developed and applied to an industrial design example. The applicability of each control strategy (PID with VFF, H∞, SPGS and STR) was investigated and a set of general guidelines for their use was determined. All controllers developed were implemented using standard industrial equipment.
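As a point of reference for the industrial baseline named above, here is a minimal discrete-time PID controller with velocity feedforward. The gains and sample time are arbitrary placeholders; this sketch does not reproduce the thesis's controllers, its H∞ synthesis, or the test facility.

```python
# Minimal discrete-time PID with velocity feedforward (VFF), the baseline
# mentioned in the abstract. Gains and sample time are placeholders only.

class PidVff:
    def __init__(self, kp: float, ki: float, kd: float, kvff: float, dt: float):
        self.kp, self.ki, self.kd, self.kvff, self.dt = kp, ki, kd, kvff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, setpoint_rate: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # PID terms plus a feedforward term proportional to the demanded velocity.
        return (self.kp * error + self.ki * self.integral +
                self.kd * derivative + self.kvff * setpoint_rate)

ctrl = PidVff(kp=1.0, ki=0.5, kd=0.01, kvff=0.8, dt=0.001)
print(ctrl.update(setpoint=1.0, setpoint_rate=0.2, measurement=0.9))
```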

Relevance:

100.00%

Publisher:

Abstract:

Lyophilisation, or freeze drying, is the preferred dehydration method for pharmaceuticals liable to thermal degradation. Most biologics are unstable in aqueous solution, and freeze drying may be used to prolong their shelf life. Lyophilisation is, however, expensive, and much work has been aimed at reducing its cost. This thesis is motivated by the potential cost savings foreseen with the adoption of a cost-efficient bulk drying approach for large and small molecules. Initial studies identified ideal formulations that adapted well to bulk drying and to the powder handling requirements further downstream in production. Low-cost techniques were used to disrupt large dried cakes into powder, while the effect of carrier agent concentration on powder flowability was investigated using standard pharmacopoeia methods. This revealed the superiority of crystalline mannitol over amorphous sucrose matrices and established that the cohesive, very poorly flowing nature of freeze-dried powders was a potential barrier to success. Powder characterisation studies showed that increased powder densification was mainly responsible for significant improvements in flow behaviour, and an initial bulking agent concentration of 10-15 % w/v was recommended. Further optimisation studies evaluated the effects of freezing rates and thermal treatment on powder flow behaviour. Slow cooling (0.2 °C/min) with a -25 °C annealing hold (2 h) provided adequate mechanical strength and densification at 0.5-1 M mannitol concentrations. Stable bulk powders require transfer into either final vials or intermediate storage closures. The targeted dosing of powder formulations using volumetric and gravimetric powder dispensing systems was evaluated using Immunoglobulin G (IgG), Lactate Dehydrogenase (LDH) and beta-galactosidase models. Final protein content uniformity in dosed vials was assessed using activity and protein recovery assays, drawing conclusions from deviations and pharmacopoeia acceptance values. A correlation between very poor flowability (p < 0.05), solute concentration, dosing time and accuracy was revealed. LDH and IgG lyophilised in 0.5 M and 1 M mannitol passed pharmacopoeia acceptance-value criteria with values of 0.1-4, while formulations with micro-collapse showed the best dose accuracy (0.32-0.4% deviation). Bulk mannitol content above 0.5 M provided no additional benefit to dosing accuracy or content uniformity of dosed units. This study identified key considerations, including the type of protein, annealing, the cake disruption process, the physical form of the phases present and humidity control, and recommended gravimetric transfer as optimal for dispensing powder. Dosing lyophilised powders from bulk was shown to be practical, time-efficient and economical, and it met regulatory requirements in the cases studied. Finally, the use of a new non-destructive technique, X-ray micro-computed tomography (MCT), was explored for cake and particle characterisation. Studies demonstrated good correlation with traditional gas porosimetry (R² = 0.93) and with morphology studies using microscopy. Flow characterisation from sample sizes of less than 1 mL was demonstrated using three-dimensional quantitative X-ray image analysis. A platinum-mannitol dispersion model revealed a relationship between freezing rate, ice nucleation sites and variations in homogeneity between the top and bottom segments of a formulation.
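The "standard pharmacopoeia methods" for flowability mentioned above are commonly expressed through bulk and tapped density, via Carr's compressibility index and the Hausner ratio. The sketch below computes these two indices as general background under that assumption; it is not the specific protocol or data used in the thesis.

```python
# Common pharmacopoeial flowability indices computed from bulk and tapped
# density. The density values below are arbitrary examples, not thesis data.

def carr_index(bulk_density: float, tapped_density: float) -> float:
    """Carr's compressibility index (%) = 100 * (tapped - bulk) / tapped."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density: float, tapped_density: float) -> float:
    """Hausner ratio = tapped / bulk; values above about 1.35 indicate poor flow."""
    return tapped_density / bulk_density

bulk, tapped = 0.30, 0.45   # g/mL, illustrative only
print(f"Carr index: {carr_index(bulk, tapped):.1f} %")     # 33.3 % (very poor flow)
print(f"Hausner ratio: {hausner_ratio(bulk, tapped):.2f}")  # 1.50 (very poor flow)
```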

Relevance:

100.00%

Publisher:

Abstract:

The main principles of, and experience in, the development of learning integrated expert systems based on the third-generation instrumental complex AT-TECHNOLOGY are considered.

Relevance:

100.00%

Publisher:

Abstract:

Mathematics Subject Classification: 33C05, 33C10, 33C20, 33C60, 33E12, 33E20, 40A30

Relevance:

100.00%

Publisher:

Abstract:

Hybridisation is a systematic process along which the characteristic features of hybrid logic, both at the syntactic and the semantic levels, are developed on top of an arbitrary logic framed as an institution. In a series of papers this process has been detailed and taken as a basis for a specification methodology for reconfigurable systems. The present paper extends this work by showing how a proof calculus (in both a Hilbert and a tableau-based format) for the hybridised version of a logic can be systematically generated from a proof calculus for the latter. Such developments provide the basis for a complete proof theory for hybrid(ised) logics, and thus pave the way to the development of (dedicated) proof support.
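For readers unfamiliar with the "characteristic features of hybrid logic" referred to above, the standard ingredients are nominals and satisfaction operators. A generic, textbook statement of their semantics (not taken from the paper) is:

```latex
% Standard hybrid-logic ingredients (textbook presentation, not from the paper):
% a nominal i names a single state, and the satisfaction operator @_i shifts
% evaluation to the state named by i.
\[
\mathcal{M},w \models i \;\iff\; w = V(i),
\qquad
\mathcal{M},w \models @_i\,\varphi \;\iff\; \mathcal{M},V(i) \models \varphi .
\]
```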

Relevance:

100.00%

Publisher:

Abstract:

This paper consists of a detailed case narrative on how a leading Australian finance organisation has utilised contemporary Business Process Management (BPM) concepts to improve IT incident management processes across the whole organisation. The target audience includes practitioners interested in BPM case studies and academics seeking case studies for innovative teaching practices.

Relevance:

100.00%

Publisher:

Abstract:

Where object-oriented languages deal with objects as described by classes, model-driven development uses models: graphs of interconnected objects, described by metamodels. A number of new languages have been, and continue to be, developed for this model-based paradigm, both for model transformation and for general programming using models. Many of these use single-object approaches to typing, derived from solutions found in object-oriented systems, while others use metamodels as model types, but without a clear notion of polymorphism. Both of these approaches lead to brittle and overly restrictive reuse characteristics. In this paper we propose a simple extension to object-oriented typing to better cater for a model-oriented context, including a simple strategy for typing models as collections of interconnected objects. We suggest extensions to existing type system formalisms to support these concepts and their manipulation. Using a simple example, we show how this extended approach permits more flexible reuse, while preserving type safety.
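A hedged sketch of the idea of typing a model as a collection of interconnected objects: here a model type is taken to be a set of required object types and references, and conformance is checked structurally, so a model offering extra features still conforms. The representation and the conformance rule are hypothetical simplifications, not the formalism proposed in the paper.

```python
# Hypothetical simplification of "model as a graph of interconnected objects"
# typing: a model type lists the object types and references it requires, and
# a model conforms if it supplies at least those.

from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class ModelType:
    # object type name -> names of references that objects of that type must offer
    required: Dict[str, Set[str]] = field(default_factory=dict)

@dataclass
class Model:
    # object type name -> references actually present on that type of object
    provided: Dict[str, Set[str]] = field(default_factory=dict)

def conforms(model: Model, mtype: ModelType) -> bool:
    """Structural conformance: every required type and reference is provided."""
    return all(t in model.provided and refs <= model.provided[t]
               for t, refs in mtype.required.items())

state_machine_type = ModelType({"State": {"outgoing"}, "Transition": {"target"}})
my_model = Model({"State": {"outgoing", "name"}, "Transition": {"target", "guard"}})
print(conforms(my_model, state_machine_type))  # True: extra features are allowed
```

Allowing extra types and references, as in the last line, is one informal reading of the "more flexible reuse" the abstract aims for, while the subset check preserves a basic notion of type safety.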