835 results for behavior-based systems
Abstract:
Mirroring the paper versions exchanged between businesses today, electronic contracts offer the possibility of dynamic, automatic creation and enforcement of restrictions and compulsions on agent behaviour that are designed to ensure business objectives are met. However, where there are many contracts within a particular application, it can be difficult to determine whether the system can reliably fulfil them all; computer-parsable electronic contracts may allow such verification to be automated. In this paper, we describe a conceptual framework and architecture specification in which normative business contracts can be electronically represented, verified, established, renewed, etc. In particular, we aim to allow systems containing multiple contracts to be checked for conflicts and violations of business objectives. We illustrate the framework and architecture with an aerospace example.
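As a rough illustration of the kind of automated check such a framework enables, the sketch below represents obligations and prohibitions as simple norm records and flags direct conflicts (an agent simultaneously obliged and prohibited to perform the same action). The Norm class, the field names and the conflict rule are hypothetical simplifications for illustration, not the paper's actual contract representation or verification method.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Norm:
    """Hypothetical, minimal norm record: who must (or must not) do what."""
    kind: str      # "obligation" or "prohibition"
    agent: str     # agent the norm applies to
    action: str    # action being regulated

def conflicts(a: Norm, b: Norm) -> bool:
    """Direct conflict: same agent and action, obligation vs. prohibition."""
    return (a.agent == b.agent and a.action == b.action
            and {a.kind, b.kind} == {"obligation", "prohibition"})

def find_conflicts(norms):
    """Return all pairwise conflicts among the norms of a set of contracts."""
    return [(a, b) for a, b in combinations(norms, 2) if conflicts(a, b)]

if __name__ == "__main__":
    contract_norms = [
        Norm("obligation", "supplier", "deliver_part_within_30_days"),
        Norm("prohibition", "supplier", "deliver_part_within_30_days"),
        Norm("obligation", "maintainer", "log_inspection"),
    ]
    for a, b in find_conflicts(contract_norms):
        print(f"conflict: {a} vs {b}")
```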
Abstract:
Of the ways in which agent behaviour can be regulated in a multiagent system, electronic contracting – based on explicit representation of different parties' responsibilities, and the agreement of all parties to them – has significant potential for modern industrial applications. Based on this assumption, the CONTRACT project aims to develop and apply electronic contracting and contract-based monitoring and verification techniques in real-world applications. This paper presents results from the initial phase of the project, which focused on requirements solicitation and analysis. Specifically, we survey four use cases from diverse industrial applications, examine how they can benefit from an agent-based electronic contracting infrastructure and outline the technical requirements that would be placed on such an infrastructure. We present the designed CONTRACT architecture and describe how it may fulfil these requirements. In addition to motivating our work on the contract-based infrastructure, the paper aims to provide a much-needed community resource in terms of the use cases themselves and to provide a clear commercial context for the development of work on contract-based systems.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This broader scope makes an efficient, yet fine-grained, search of the huge design space of a reuse-based environment possible. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for testing the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overheads are kept to a minimum. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC’02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
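The power-aware scheduling idea can be sketched in a few lines: cores with known test times and peak power figures are packed greedily into parallel test sessions whose summed power must stay under the system budget. This is a generic greedy heuristic for illustration only, with made-up core figures; it is not the thesis' access-mechanism model, its design-space search algorithm, or its NoC-reuse scheduler.

```python
# Greedy power-constrained test scheduling sketch (illustrative only).

def schedule(tests, power_budget):
    """tests: dict core -> (test_time, peak_power). Returns (sessions, total_time)."""
    assert all(p <= power_budget for _, p in tests.values()), "budget too low"
    remaining = sorted(tests.items(), key=lambda kv: kv[1][0], reverse=True)
    sessions = []
    while remaining:
        session, used_power, leftovers = [], 0.0, []
        for core, (t, p) in remaining:
            if used_power + p <= power_budget:
                session.append(core)       # test runs in this parallel session
                used_power += p
            else:
                leftovers.append((core, (t, p)))
        # Session length is set by its longest core test.
        sessions.append((session, max(tests[c][0] for c in session)))
        remaining = leftovers
    return sessions, sum(t for _, t in sessions)

if __name__ == "__main__":
    cores = {"c1": (100, 40), "c2": (80, 35), "c3": (60, 30), "c4": (50, 20)}
    sessions, total = schedule(cores, power_budget=70)
    for members, t in sessions:
        print("session:", members, "time:", t)
    print("total test time:", total)
```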
Abstract:
Currently, there are different definitions of fuzzy implications accepted in the literature. From a theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From a practical point of view, it raises doubts about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators can result in FRBSs that are less accurate and less appropriate for their application domains. One way to deal with this situation is to understand the fuzzy logical connectives better, which requires knowing which properties such connectives can satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalized and studied as equations or inequations in fuzzy logics. Such generalizations are called Boolean-like laws, and they do not generally hold under every fuzzy semantics. In this scenario, this dissertation investigates the sufficient and necessary conditions under which three Boolean-like laws, namely y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)), remain valid in the fuzzy context, considering six classes of fuzzy implications and implications generated by automorphisms. Moreover, still with the aim of implementing more appropriate FRBSs, we propose an extension of them.
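The three Boolean-like laws can also be probed numerically: the sketch below checks y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)) on a grid of points for two standard fuzzy implications. The Łukasiewicz and Gödel definitions used here are the usual textbook ones and are assumptions for illustration; this is a sanity check, not a reproduction of the dissertation's analytical results.

```python
import itertools
import numpy as np

# Two standard fuzzy implications (assumed textbook definitions).
def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def goedel(x, y):
    return 1.0 if x <= y else y

def check_laws(I, step=0.1, tol=1e-9):
    """Check the three Boolean-like laws for implication I on a unit-square grid."""
    grid = np.arange(0.0, 1.0 + 1e-12, step)
    law1 = all(y <= I(x, y) + tol for x, y in itertools.product(grid, grid))
    law2 = all(abs(I(x, I(y, x)) - 1.0) <= tol
               for x, y in itertools.product(grid, grid))
    law3 = all(abs(I(x, I(y, z)) - I(I(x, y), I(x, z))) <= tol
               for x, y, z in itertools.product(grid, grid, grid))
    return law1, law2, law3

if __name__ == "__main__":
    for name, I in [("Lukasiewicz", lukasiewicz), ("Goedel", goedel)]:
        print(name, check_laws(I))
```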
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The influence of shear fields on water-based systems was investigated in this thesis. The non-linear rheological behaviour of spherical and rod-like particles was examined with Fourier-transform rheology under LAOS conditions. As a model system for spherical particles, two different kinds of polystyrene dispersions, each with a solid content higher than 0.3, were synthesised within this work. Owing to differences in polydispersity and Debye length, differences were also found in their rheology. In FT-rheology, both kinds of dispersions showed a similar rise in the intensities of the odd higher harmonics, which was predicted by a model. The second harmonics that additionally appeared in some cases were not predicted. A novel method to analyse the time-domain signal was developed, which splits it into four characteristic functions; these characteristic functions correspond to rheological phenomena. In some cases the intensities of the Fourier components can interfere negatively. fd-virus particles were used as a rod-like model system, which already shows highly non-linear behaviour at concentrations below 1 wt.%. Predictions for the dependence of the higher harmonics on the strain amplitude described the non-linear behaviour well at large strain amplitudes, but less well at small ones. Additionally, the trends of the rheological behaviour could be described by a theory for rod-like particles. An existing rheo-optical set-up was enhanced by reducing the background birefringence by a factor of 20 and by increasing the time resolution by a factor of 24; in addition, a combination of FT-rheology and rheo-optics was achieved. The influence of a constant shear field on the crystallisation process of zinc oxide in the presence of a polymer was also examined. The crystallites showed a reduction in length by a factor of 2. The directed addition of polymers in combination with a defined shear field can thus be a simple way to change the shape of crystallites in a defined manner.
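As a minimal, generic illustration of the FT-rheology analysis described above (not the thesis' characteristic-function decomposition or its measured data), the sketch below builds a synthetic non-linear stress signal, Fourier-transforms it, and reports the relative intensities of the odd higher harmonics I_n/I_1. The excitation frequency and harmonic amplitudes are assumed toy values.

```python
import numpy as np

# Synthetic LAOS-like stress response: fundamental plus weak odd harmonics.
omega = 1.0                              # excitation frequency [rad/s], assumed
cycles, pts_per_cycle = 50, 256
t = np.linspace(0, 2 * np.pi * cycles / omega,
                cycles * pts_per_cycle, endpoint=False)
stress = (np.sin(omega * t)
          + 0.05 * np.sin(3 * omega * t + 0.3)
          + 0.01 * np.sin(5 * omega * t + 0.1))

# Magnitude spectrum; frequency axis expressed in multiples of the fundamental.
spectrum = np.abs(np.fft.rfft(stress))
harmonic_axis = np.fft.rfftfreq(len(t), d=t[1] - t[0]) * 2 * np.pi / omega

def harmonic_intensity(n):
    """Spectral magnitude at the n-th harmonic of the excitation frequency."""
    return spectrum[np.argmin(np.abs(harmonic_axis - n))]

I1 = harmonic_intensity(1)
for n in (3, 5, 7):
    print(f"I_{n}/I_1 = {harmonic_intensity(n) / I1:.4f}")
```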
Abstract:
The aim of this thesis was to apply the techniques of the atomic force microscope (AFM) to biological samples, namely lipid-based systems. To this end, several systems with biological relevance based on self-assembly, such as a solid-supported membrane (SSM) based sensor for transport proteins, a bilayer of the natural lipid extract from an archaebacterium, and synaptic vesicles, were investigated by AFM. For the characterization of transport proteins with SSM sensors, proteoliposomes containing the analyte (the transport protein) are adsorbed. However, the forces governing bilayer-bilayer interactions in solution should be repulsive under physiological conditions. I investigated the nature of the interaction forces with AFM force spectroscopy by mimicking the adsorbing proteoliposome with a cantilever tip functionalized with charged alkane thiols. The nature of the interaction is indeed repulsive, but the lipid layers assemble in stacks on the SSM, which expose their unfavourable edges to the medium. I propose a model by which the proteoliposomes interact with these edges and fuse with the bilayer stacks, so forming a uniform layer on the SSM. Furthermore, I characterized freestanding bilayers from a synthetic phospholipid with a phase transition at 41°C and from a natural lipid extract of the archaebacterium Methanococcus jannaschii. The synthetic lipid is in the gel phase at room temperature and changes to the fluid phase when heated to 50°C. The bilayer of the lipid extract shows no phase transition when heated from room temperature to the growth temperature (~ 50°C) of the archaeon. Synaptic vesicles are the containers of neurotransmitter in nerve cells, and the synapsins are a family of extrinsic membrane proteins that are associated with them and are believed to control the synaptic vesicle cycle. I used AFM imaging and force spectroscopy together with dynamic light scattering to investigate the influence of synapsin I on synaptic vesicles. To this end I used native, untreated synaptic vesicles and compared them to synapsin-depleted synaptic vesicles. Synapsin-depleted vesicles were larger in size and showed a higher tendency to aggregate compared to native vesicles, although their mechanical properties were alike. I also measured the aggregation kinetics of synaptic vesicles induced by synapsin I and found that the addition of synapsin I promotes a rapid aggregation of synaptic vesicles. The data indicate that synapsin I affects the stability and the aggregation state of synaptic vesicles, and confirm the physiological role of synapsins in the assembly and regulation of synaptic vesicle pools within nerve cells.
Abstract:
The confluence of three-dimensional (3D) virtual worlds with social networks imposes on software agents, in addition to conversational functions, the same behaviours as those common to human-driven avatars. In this paper, we explore the possibilities of the use of metabots (metaverse robots) with motion capabilities in complex virtual 3D worlds and we put forward a learning model based on the techniques used in evolutionary computation for optimizing the fuzzy controllers which will subsequently be used by metabots for moving around a virtual environment.
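A bare-bones version of the learning idea (evolutionary search over fuzzy-controller parameters) might look like the sketch below: a (1+λ) evolution strategy mutates the centres of a small set of membership functions and keeps the best candidate according to a fitness function. The controller layout, the fitness function and the target values are placeholders for illustration, not the paper's actual metabot controller or evaluation environment.

```python
import random

# Placeholder fitness: how well a parameter vector (membership-function centres
# of a hypothetical fuzzy motion controller) performs in a simulated task.
def fitness(params):
    target = [0.2, 0.5, 0.8]          # assumed "ideal" centres for the toy task
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, sigma=0.05):
    """Gaussian mutation, clipped to the unit interval."""
    return [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in params]

def evolve(generations=200, lam=10):
    """(1+lambda) evolution strategy over the controller parameters."""
    best = [random.random() for _ in range(3)]
    best_fit = fitness(best)
    for _ in range(generations):
        for child in (mutate(best) for _ in range(lam)):
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit

if __name__ == "__main__":
    params, score = evolve()
    print("best centres:", [round(p, 3) for p in params], "fitness:", round(score, 4))
```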
Abstract:
The aim of this chapter is to discuss the applicability of recently proposed knowledge modelling tools to the development of agent-based systems. The discussion is derived from the real world experience of a particular software tool called KSM (Knowledge Structure Manager). The chapter provides details about this tool and then proceeds to show in which forms the software may be used to support the development of agent-based systems. Two multiagent systems, one in the field of telecommunications management and the other one in the field of flood control, are described. Conclusions about these studies are presented, summarizing the main contributions that knowledge modelling tools can bring to the development of agent-based systems.
Abstract:
Optical filters are crucial elements in optical communication networks. Their influence on the optical signal seriously affects communication quality. In this paper we study and simulate the optical signal impairment and crosstalk penalty caused by different kinds of filters, including Butterworth, Bessel, Fiber Bragg Grating (FBG) and Fabry-Perot (F-P) filters. Signal impairment from the filter concatenation effect and the penalties from out-of-band and in-band crosstalk are analyzed in terms of Q-penalty, eye-opening penalty (EOP) and optical spectrum. The simulation results show that the signal impairment and crosstalk penalty induced by the Butterworth filter are the lowest among these four types of filters. The analysis of the filter concatenation effect shows that, when the center frequency of all filters is aligned perfectly with the laser frequency, 12 50-GHz Butterworth filters can be cascaded with a 1-dB EOP. This number is reduced to 9 when the center frequency is misaligned by 5 GHz. In 50-GHz channel-spacing DWDM networks, the total Q-penalty induced by a Butterworth-filter-based demultiplexer and multiplexer pair is lower than 0.5 dB when the filter bandwidth is in the range of 42-46 GHz.
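The concatenation effect itself is easy to reproduce qualitatively: cascading N identical band-pass filters multiplies their amplitude responses, so the effective passband narrows with N. The sketch below uses an idealised Butterworth-shaped power response to show how the cascaded 1-dB bandwidth shrinks; the filter order and 50-GHz bandwidth are assumed values, and the script does not model the eye-opening penalty or crosstalk from the paper's simulations.

```python
import numpy as np

def butterworth_power(f, bw=50.0, order=4):
    """Idealised Butterworth band-pass power response, centred at offset f = 0 GHz."""
    return 1.0 / (1.0 + (2.0 * f / bw) ** (2 * order))

def cascaded_bandwidth(n_filters, level_db=-1.0, bw=50.0, order=4):
    """Two-sided 1-dB bandwidth (GHz) of n identical cascaded filters, found numerically."""
    f = np.linspace(0.0, bw, 20001)                  # frequency offset from centre, GHz
    total_db = 10.0 * n_filters * np.log10(butterworth_power(f, bw, order))
    inside = f[total_db >= level_db]
    return 2.0 * inside.max()

if __name__ == "__main__":
    for n in (1, 4, 8, 12):
        print(f"{n:2d} cascaded filters -> 1-dB bandwidth ~ {cascaded_bandwidth(n):.1f} GHz")
```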
Abstract:
Due to the huge increase in data volumes, a new parallel computing paradigm for processing big data in an efficient way has arisen. Many of these systems, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of these systems is based on the idea of sending the computation to where the data resides, trying to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, most of the scenarios in which they are used are characterized by the presence of failures. Consequently, these frameworks incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. In this thesis, we have addressed the coexistence between reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. In order to achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and finally (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the state-of-the-art framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
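One of these contributions, the failure-detector abstraction, can be illustrated with a generic timeout-based (heartbeat) detector like the sketch below. The class name, timeout value and API are hypothetical and unrelated to the thesis' formalization or to Hadoop YARN internals; it only shows the basic suspect-on-silence mechanism.

```python
import time

class HeartbeatFailureDetector:
    """Generic timeout-based failure detector (illustrative sketch only).

    A node is suspected when no heartbeat has arrived within `timeout` seconds.
    """

    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.last_heartbeat = {}          # node id -> time of last heartbeat

    def heartbeat(self, node, now=None):
        """Record a heartbeat from `node` (e.g. a worker or container)."""
        self.last_heartbeat[node] = now if now is not None else time.time()

    def suspected(self, now=None):
        """Return the set of nodes currently suspected to have failed."""
        now = now if now is not None else time.time()
        return {n for n, t in self.last_heartbeat.items() if now - t > self.timeout}

if __name__ == "__main__":
    fd = HeartbeatFailureDetector(timeout=5.0)
    fd.heartbeat("worker-1", now=0.0)
    fd.heartbeat("worker-2", now=0.0)
    fd.heartbeat("worker-1", now=4.0)     # worker-2 stays silent
    print(fd.suspected(now=8.0))          # -> {'worker-2'}
```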
Abstract:
New cloud-based technologies, the internet of things and "as a service" trends are based on storing and processing data on remote servers. In order to guarantee secure communication and handling of the data, cryptographic schemes are used. Traditionally, these cryptographic schemes focus on guaranteeing the security of data while it is stored and transferred, not while it is operated on. Therefore, once the server has to operate with that encrypted data, it first decrypts it, exposing unencrypted data to intruders in the server. Moreover, the whole traditional scheme is based on the assumption that the server is reliable, giving it enough credentials to decipher the data in order to process it. As a possible solution to these issues, fully homomorphic encryption (FHE) schemes have been introduced. A fully homomorphic scheme does not require data decryption to operate, but rather operates over the ciphertext ring, keeping a homomorphism between the ciphertext ring and the plaintext ring. As a result, an outsider could only obtain encrypted data, making it impossible to retrieve the actual sensitive data without the associated cipher keys. However, using homomorphic encryption (HE) schemes impacts performance drastically: one operation in the plaintext space can lead to several operations in the ciphertext space. Because of this, different approaches address the problem of speeding up these schemes in order to make them practical. One of these approaches consists in the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing - hence "field-programmable". Compiling for an FPGA means generating a circuit (hardware) specific to that algorithm, instead of having a universal machine and generating a set of machine instructions. FPGAs thus have clear differences compared to CPUs: a pipeline architecture, which allows successive outputs to be obtained in constant time, and the possibility of having multiple pipes for concurrent/parallel computation. In this project: (i) we present different implementations of FHE schemes in FPGA-based systems; (ii) we analyse and study the advantages and drawbacks of the implemented FHE schemes; and (iii) we compare the implementations with related work.
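The homomorphic principle itself (operating on ciphertexts so that the result decrypts to the corresponding operation on the plaintexts) can be shown with a small software example. The sketch below uses the Paillier cryptosystem, which is only additively homomorphic and is run here with toy-sized primes, so it is an illustration of the idea rather than the fully homomorphic schemes or the FPGA implementations discussed in the project.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Tiny primes, for
# illustration only -- not secure, and not a fully homomorphic scheme.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)       # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

if __name__ == "__main__":
    m1, m2 = 1234, 4321
    c1, c2 = encrypt(m1), encrypt(m2)
    # Homomorphic addition: multiplying ciphertexts adds the plaintexts.
    c_sum = (c1 * c2) % n2
    print(decrypt(c_sum), "==", m1 + m2)
```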