980 results for Resilience engineering perspectives


Relevance: 30.00%

Abstract:

The purpose of this paper is to investigate the cost management practices of building-industry companies in Paraná that follow the typology of Porter's generic strategies. The sample comprises member companies of the Association of Construction Industries of the State of Paraná (PR-SINDUSCON) operating in the residential-building segment. Data were collected by means of questionnaires sent to 317 SINDUSCON members; 69 were returned and 54 were usable for the research. Exploratory Factor Analysis of the data allowed us to identify two groups of cost management practices. The analyses suggest that the companies in Group 1 adopted essentially the same cost management practices, centred on Cost Control Planning (CCP), regardless of the generic strategy adopted. The companies in Group 2 that adopted a differentiation strategy seem to rely mainly on the ACR cost management practice. Our findings differ from those obtained by Chenhall insofar as companies that adopt low-cost strategies tend to use managerial controls focused on cost control and rigid budgetary controls.
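The grouping step described above can be sketched with a toy exploratory factor analysis. This is an illustrative reconstruction, not the authors' procedure: the synthetic data, the two-latent-trait structure, and the principal-component extraction without rotation are all assumptions.

```python
import numpy as np

def factor_loadings(X, n_factors=2):
    """Toy exploratory factor analysis: principal-component extraction
    from the item correlation matrix (no rotation)."""
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)             # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, top] * np.sqrt(eigvals[top])   # factor loadings

def group_items(loadings):
    """Assign each questionnaire item to the factor it loads highest on."""
    return np.argmax(np.abs(loadings), axis=1)

# Synthetic responses: 54 usable questionnaires (as in the study), 6 items;
# items 0-2 driven by one latent practice, items 3-5 by another (hypothetical).
rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=(2, 54))
X = np.column_stack(
    [t1 + 0.2 * rng.normal(size=54) for _ in range(3)]
    + [t2 + 0.6 * rng.normal(size=54) for _ in range(3)]
)
groups = group_items(factor_loadings(X))
```

In practice one would inspect the loadings after a varimax-style rotation before labelling the groups; the argmax assignment here is only the simplest possible grouping rule.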

Relevance: 30.00%

Abstract:

This article presents an overview of relevant issues to be considered in the development of standardized phytochemical preparations, focusing on the use of the spouted bed as a drying method. Aspects related to the effects of feed composition and processing parameters on system performance and product quality are addressed. From the information presented, it can be concluded that spouted bed technology can be successfully applied to the production of high-quality phytochemical preparations suitable for food and pharmaceutical purposes, considering the requirements for product safety, quality, and efficacy. Nevertheless, it should be emphasized that, at this time, the proposed technology is appropriate for small-scale production, mainly due to difficulties concerning the scale-up, modeling, and simulation of spouted bed systems, and in predicting product properties and system behavior during operation.

Relevance: 30.00%

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing, at runtime, typically distributed software systems. However, engineers today often work with technologies that do not support the abstractions used in the design of the systems; research on methodologies therefore becomes a central concern. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still belong to the first wave of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are often poorly documented and are frequently defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen.
In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model all the aspects of multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; it is clear, at least, that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is now generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for in engineering, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some function) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
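As a rough illustration of the ingredients named above (environment abstractions, topology abstractions and a layered description), a meta-model fragment might be sketched as follows. The class names and the example system are hypothetical and are not SODA's actual notation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    name: str                      # autonomous, goal-oriented entity

@dataclass
class EnvironmentAbstraction:
    name: str
    function: str                  # the environment function it encapsulates

@dataclass
class TopologyAbstraction:
    name: str                      # a logical or physical spatial locus
    contains: List[str] = field(default_factory=list)

@dataclass
class Layer:
    """One level of abstraction over the multi-agent system."""
    name: str
    agents: List[Agent] = field(default_factory=list)
    environment: List[EnvironmentAbstraction] = field(default_factory=list)
    topology: List[TopologyAbstraction] = field(default_factory=list)

# A two-layer description: the upper layer abstracts away individual rooms.
detailed = Layer(
    "detailed",
    agents=[Agent("cleaner-1"), Agent("cleaner-2")],
    environment=[EnvironmentAbstraction("map", "shared route knowledge")],
    topology=[TopologyAbstraction("room-A", ["cleaner-1"]),
              TopologyAbstraction("room-B", ["cleaner-2"])],
)
abstract = Layer(
    "abstract",
    agents=[Agent("cleaning-team")],
    topology=[TopologyAbstraction("building", ["cleaning-team"])],
)
layers = [abstract, detailed]      # ordered from coarse to fine
```

The point of the sketch is only that agents, environment entities and topology entities appear side by side as first-class building blocks, and that the same system admits several layered views.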

Relevance: 30.00%

Abstract:

With the aim of providing people with sustainable options, engineers are ethically required to hold the safety, health and welfare of the public paramount and to satisfy society's need for sustainable development. The global crisis and the related sustainability challenges call for a fundamental change in culture, structures and practices. Sustainability Transitions (ST) have been recognized as a promising framework for radical system innovation towards sustainability. To enhance the effectiveness of transformative processes, both the adoption of a transdisciplinary approach and experimentation with practices are crucial. The evolution of approaches towards ST provides a series of inspiring cases which make it possible to identify advances in making sustainability transitions happen. In this framework, the thesis has emphasized the role of Transition Engineering (TE). TE adopts a transdisciplinary approach for engineering to face the sustainability challenges and address the risks of un-sustainability. With this purpose, a definition of Transition Technologies is provided, as valid instruments to contribute to ST. In the empirical section, several transition initiatives have been analysed, especially at the urban level. As a consequence, the model of the living lab of sustainability has emerged as crucial. Living labs are environments in which innovative technologies and services are co-created with users' active participation. In this framework, the university can play a key role as a learning organization. The core of the thesis concerns the experimental application of the transition approach within the School of Engineering and Architecture of the University of Bologna at the Terracini Campus, with the final vision of realizing a living lab of sustainability. In particular, a Transition Team has been established and several transition experiments have been conducted.
The final result is not only the improvement of the sustainability and resilience of the Terracini Campus, but also the demonstration that universities can generate solutions and strategies that tackle the complex, dynamic factors fuelling the global crisis.

Relevance: 30.00%

Abstract:

This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility; (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform from DCS; and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and the presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%) and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%) and galactose (0.4%). Based on the measured physical and chemical components, a biochemical conversion route with subsequent fermentation to value-added products was identified as promising: DCS has the potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were varied to determine the optimal conditions that maximize monomer sugar yields while keeping inhibitors at a minimum. Total fermentable sugars produced can reach approximately 86% of the theoretical yield under dilute acid pretreatment (DAP). DAP followed by enzymatic hydrolysis was most effective for the 0 wt% acid hydrolysate samples and least efficient for the 1 and 2 wt% acid hydrolysate samples. From an industrial point of view, the best hydrolysis scheme for DCS is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration.
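The "percent of theoretical yield" figure above follows from the polysaccharide fractions just listed. A minimal sketch using the reported composition and the standard anhydro-correction factors; grouping mannan and galactan with the hexose pool, and the exact molar masses used, are my assumptions rather than the thesis's stated procedure:

```python
# Polymer -> monomer hydrolysis adds one water per glycosidic bond broken,
# so monomer mass exceeds polymer (anhydro) mass by the usual correction factors.
HEXOSE_FACTOR = 180.16 / 162.14    # starch/mannan/galactan -> hexoses
PENTOSE_FACTOR = 150.13 / 132.12   # xylan/arabinan -> pentoses

def theoretical_sugars(polymers_pct):
    """Maximum monomer sugar (g per 100 g dry solids) from polysaccharide fractions."""
    hexose = sum(polymers_pct.get(p, 0.0) for p in ("starch", "mannan", "galactan"))
    pentose = sum(polymers_pct.get(p, 0.0) for p in ("xylan", "arabinan"))
    return hexose * HEXOSE_FACTOR + pentose * PENTOSE_FACTOR

# Polysaccharide composition reported for DCS (wt% of dry solids)
dcs = {"starch": 5.6, "mannan": 1.0, "galactan": 3.0, "xylan": 6.0, "arabinan": 1.0}
theoretical = theoretical_sugars(dcs)
achieved = 0.86 * theoretical      # ~86 % of theoretical yield under DAP
```

Sugars already present as monomers (the 12% soluble fraction) would be added on top of this hydrolysis yield in a full accounting.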
The combined effect of hydrolysis reaction time, temperature and enzyme-to-substrate ratio on a hydrolysis process that optimizes the production of amino acids from DCS was studied. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first pathway is amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus) and Protex 6L]. The third pathway is a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24-hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research were: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS; (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility; and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design was constructed. Finally, the Aspen Plus Economic Analyzer® and SimaPro® software were employed to conduct the cost analysis and the carbon footprint analysis of this process facility, respectively. Another section of my Ph.D. research focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S. A greenhouse gas (GHG) emissions analysis was conducted for the cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction.
The final section of my Ph.D. research was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.
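The feed-footprint metric described above is, at its core, activity data multiplied by emission factors and normalized per kilogram of dry feed. A minimal sketch with entirely hypothetical numbers (not the study's inventory data or emission factors):

```python
def feed_carbon_footprint(inventory, dry_feed_kg):
    """Carbon footprint in gCO2e per kg of dry feed: the sum of
    (activity amount x emission factor) divided by the feed mass produced."""
    return sum(amount * factor for amount, factor in inventory.values()) / dry_feed_kg

# Hypothetical per-hectare inventory for one feed crop (illustrative only):
inventory = {                       # (amount, gCO2e per unit)
    "diesel_L":        (90.0, 2700.0),
    "N_fertilizer_kg": (150.0, 5600.0),
    "electricity_kWh": (40.0, 450.0),
}
footprint = feed_carbon_footprint(inventory, dry_feed_kg=9000.0)
```

A regional comparison, as in the study, would simply repeat this aggregation with region-specific yields and emission factors.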

Relevance: 30.00%

Abstract:

While the WTO agreements do not regulate the use of biotechnology per se, their rules can have a profound impact on the use of the technology for both commercial and non-commercial purposes. This book seeks to identify the challenges to international trade regulation that arise from biotechnology. The contributions examine whether the existing international obligations of WTO Members are appropriate for dealing with the issues arising from the use of biotechnology, and whether there is a need for new international legal instruments, including a potential WTO Agreement on Biotechnology. They combine various perspectives on, and topics relating to, genetic engineering and trade, including human rights and gender; intellectual property rights; traditional knowledge, access and benefit sharing; food security, trade and agricultural production and food safety; and medical research, cloning and international trade.

Relevance: 30.00%

Abstract:

Livelihood resilience draws attention to the factors and processes that keep livelihoods functioning despite change, and thus enriches the livelihood approach, which puts people, their differential capabilities to cope with shocks, and the question of how to reduce poverty and improve adaptive capacity at the centre of analysis. However, the few studies addressing resilience from a livelihood perspective take different approaches and focus only on some dimensions of livelihoods. This paper presents a framework that can be used for a comprehensive empirical analysis of livelihood resilience. We use a concept of resilience that considers agency as well as structure. A review of both theoretical and empirical literature related to livelihoods and resilience served as the basis for integrating the perspectives. The paper identifies attributes and indicators of the three dimensions of resilience, namely buffer capacity, self-organisation and capacity for learning. The framework has not yet been systematically tested; however, the potentials and limitations of its components are explored and discussed by drawing on empirical examples from the literature on farming systems. Besides providing a basis for applying the resilience concept in livelihood-oriented research, the framework offers a way to communicate with practitioners on identifying and improving the factors that build resilience. It can thus serve as a tool for monitoring the effectiveness of policies and practices aimed at building livelihood resilience.
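To make the three dimensions concrete, indicator scores could be aggregated per dimension. This is a hypothetical illustration of how the framework's attributes and indicators might be operationalized; the indicator names, the 0-1 scale and the unweighted mean are assumptions, not part of the paper:

```python
def resilience_scores(indicators):
    """Mean indicator score per resilience dimension (all scores on a 0-1 scale)."""
    return {dim: sum(vals) / len(vals) for dim, vals in indicators.items()}

# Hypothetical farm-household scores, grouped by the framework's three dimensions:
household = {
    "buffer capacity":       [0.6, 0.8, 0.5],   # e.g. assets, crop diversity, savings
    "self-organisation":     [0.7, 0.4],        # e.g. cooperation, network reliance
    "capacity for learning": [0.5, 0.9, 0.7],   # e.g. knowledge sharing, experimenting
}
scores = resilience_scores(household)
```

Tracking such per-dimension scores over time is one way the framework could serve as the monitoring tool the paper proposes.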

Relevance: 30.00%

Abstract:

This review reports on the application of charge density analysis in the field of crystal engineering, one of the fastest-growing and most productive areas of the entire field of crystallography. While methods to calculate or measure the electron density are not discussed in detail, the derived quantities and tools useful for crystal engineering analyses are presented, and their applications in the recent literature are illustrated. Potential developments and future perspectives are also highlighted and critically discussed.

Relevance: 30.00%

Abstract:

In a 1933 public letter to Wilhelm Furtwängler, Joseph Goebbels synthesized the official understanding of the link between politics, art and society in the early steps of the Third Reich. By assuming the ethos of art, politics acquired a plastic agency to mold its objects —population and the state— as a unified entity in the form of a ‘national-popular community’ (Volksgemeinschaft); in turn, by infusing art with a political valence, art became part of a wider governmental apparatus that reshaped aesthetic discourses and practices. Similar remarks could be made about the ordering of cities and territories in this period. Dictatorial imaginations mobilized urbanism —including urban theory, urban design and planning— as a fundamental tool for social organization. Under their aegis the production of space became a moment in a wider production of society. Many authors suggest that this political-spatial nexus is intrinsic to modernity itself, beyond dictatorial regimes. In this light, I propose to use dictatorial urbanisms as an analytical opportunity to delve into some concealed features of modern urban design and planning. This chapter explores some of these aspects from a theoretical standpoint, focusing on the development of dictatorial planning mentalities and spatial rationalities, and drawing links to other historical episodes in order to inscribe the former in a broader genealogy of urbanism. Needless to say, I don’t suggest that we use dictatorships as mere templates to understand modern productions of space. Instead, these cases provide a crude version of some fundamental drives in the operationalization of urbanism as an instrument of social regulation, showing how far the modern imagination of sociospatial orderings can go. Dictatorial urbanisms constituted a set of experiences where many dreams and aspirations of modern planning went to die.
But not, as the conventional account would have it, because the former were the antithesis of the latter, but rather because they worked as the excess of a particular orientation of modern spatial governmentalities — namely, their focus on calculation, social engineering and disciplinary spatialities, and their attempt to subsume a wide range of everyday practices under institutional structuration by means of spatial mediations. In my opinion the interest of dictatorial urbanisms lies in their role as key regulatory episodes in a longer history of our urban present. They stand as a threshold between the advent of planning in the late 19th and early 20th century, and its final consolidation as a crucial state instrument after World War II. We need, therefore, to pay attention to these experiences vis-à-vis the alleged ‘normal’ development of the field in contemporary democratic countries in order to develop a full comprehension thereof.

Relevance: 30.00%

Abstract:

This paper shows the importance of a holistic comprehension of the Earth as a living planet, which humankind inhabits and where people are exposed to environmental hazards of different kinds. The aim of the paper summarized here is a reflection on these concepts and on scientific considerations related to the important role of humans in the handling of natural hazards. Our planet is an unstable, dynamical system highly sensitive to initial conditions, as proposed by chaos theory (González-Miranda 2004); it is a complex organic whole that responds to minimal variations, which can affect natural phenomena as diverse as plate tectonics, solar flares, fluid turbulence, landscape formation, forest fires, the growth and migration of populations, and biological evolution. This is known as the “butterfly effect” (Lorenz 1972): a small change in the system causes a chain of events leading to large-scale, unpredictable consequences. The aim of this work is to dwell on the importance of knowledge of these natural and catastrophic geological, biological and human systems, so sensitive to equilibrium conditions, in order to prevent, avoid and mend their effects, and to face them in a resilient way.
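The sensitivity to initial conditions invoked above can be demonstrated numerically with the classic Lorenz system, the model behind Lorenz's butterfly metaphor. A sketch using forward-Euler integration; the step size and parameters are the conventional textbook choices, not anything specific to this paper:

```python
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def max_divergence(eps, steps=3000):
    """Largest separation reached between two trajectories whose
    initial x-coordinates differ by eps."""
    p = (1.0, 1.0, 1.05)
    q = (1.0 + eps, 1.0, 1.05)
    gap = 0.0
    for _ in range(steps):
        p, q = lorenz_step(*p), lorenz_step(*q)
        gap = max(gap, max(abs(a - b) for a, b in zip(p, q)))
    return gap

# A perturbation of 1e-8 in one coordinate grows to macroscopic size by t = 30,
# while identical initial conditions stay identical forever.
print(max_divergence(1e-8), max_divergence(0.0))
```

The exponential growth of the tiny perturbation, until it saturates at the size of the attractor, is exactly the chain of events the abstract describes.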

Relevance: 30.00%

Abstract:

Within the field of the city as a place, the concepts of territorial planning and spatial planning are analysed. Flooding is one of the main risks associated with many urban settlements in Spain and, indeed, elsewhere. The location of cities has traditionally ignored this type of risk, as other locational criteria prevailed (communications, crop yields, etc.). Defence engineering has been the customary way to offset the risk but, nowadays, the opportunity costs of engineering works in urban areas have highlighted the interest of “soft measures” based on prevention. Early warning systems and development planning controls rank among the most favoured. This paper reflects the results of a recent EU-financed research project on alternative measures geared to the enhancement of urban resilience against flooding. A city study in Spain is used as an example of those measures.

Relevance: 30.00%

Abstract:

ABSTRACT

The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks.
So, the IoT can be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can be further broken down into "self-*" properties such as self-adaptation, self-healing, self-configuration and self-optimization. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from the others, so that they are no longer capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation is the IoT connectivity problem. A network is regarded as connected if any pair of distinct nodes can communicate with each other, either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented on Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike conventional network devices, nodes in the IoT are more likely to become disconnected from each other because they are deployed in hostile or isolated environments. On the other hand, the nodes are resource-constrained in terms of processing capability, storage and battery capacity, which requires the design of resilience management for the IoT to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making it resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible.
The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis addresses various aspects concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems that respond to node failures, control system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. The thesis also establishes, through mathematical analysis, the relationship between the "node degree" (the number of neighbors of a node) and network connectivity, and proves the effectiveness of various types of controllers that adjust the transmission power of the IoT nodes in response to node failures; the controllers also take energy consumption into account as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy than those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the proposals are integrated into the OSGi (Open Services Gateway initiative) programming model in order to create a self-adaptive middleware, in particular one that reconfigures software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and to practical applications of resilient topology control for large, distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT.
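The topology-control idea, with each node adjusting its transmission power from purely local information so as to hold a target node degree, can be sketched with a plain proportional controller. This is a simplification of the fuzzy/PID designs the thesis studies: the unit-square geometry, the gain, and the asymmetric "within my radius" neighbour test are all illustrative assumptions.

```python
import math
import random

def neighbours(i, pos, radius):
    """Nodes inside i's transmission radius (a simplified, asymmetric link model)."""
    xi, yi = pos[i]
    return [j for j, (x, y) in enumerate(pos)
            if j != i and math.hypot(x - xi, y - yi) <= radius[i]]

def control_step(pos, radius, target, kp=0.01, r_min=0.02, r_max=0.5):
    """Each node applies one proportional-control update using only local data:
    raise power when under-connected, lower it when over-connected."""
    for i in range(len(pos)):
        error = target - len(neighbours(i, pos, radius))
        radius[i] = min(r_max, max(r_min, radius[i] + kp * error))

random.seed(1)
pos = [(random.random(), random.random()) for _ in range(60)]
radius = [0.02] * 60               # start at minimal transmission power
for _ in range(300):
    control_step(pos, radius, target=6)
avg_degree = sum(len(neighbours(i, pos, radius)) for i in range(60)) / 60
```

Capping the radius at `r_max` stands in for the energy constraint: nodes spend only as much transmission power as the degree target requires, which is the trade-off the thesis's controllers optimize explicitly.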
The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerance probability of a large-scale stochastic network is analyzed, studying how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of neighbors to be added for network re-connection is. (2) A fuzzy-logic control system is proposed that obtains the desired node degree and in turn maintains network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulation, and the results demonstrate an improvement in fault-tolerance capability compared with other representative algorithms. (3) A simpler but more readily applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals is implemented and tested in a real-world application by combining the self-adaptive software technique and the control algorithms presented in this thesis.
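To illustrate contribution (3), the sketch below implements a basic Differential Evolution loop, one of the heuristics named above, tuning two controller gains. The cost function here is a toy quadratic with a known optimum; in the thesis the objective would instead score simulated connectivity and energy consumption, and the gain names and bounds are assumptions for illustration.

```python
# Hedged sketch of Differential Evolution (DE) tuning controller gains.
import random

def cost(gains):
    # Hypothetical stand-in objective with its optimum at kp=0.4, ki=0.05.
    kp, ki = gains
    return (kp - 0.4) ** 2 + (ki - 0.05) ** 2

def differential_evolution(f, bounds, pop_size=20, gens=100,
                           F=0.7, CR=0.9, seed=3):
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([x for x in pop if x is not pop[i]], 3)
            trial = []
            jrand = random.randrange(dim)      # force at least one mutated gene
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = a[j] + F * (b[j] - c[j])   # mutation
                    lo, hi = bounds[j]
                    v = max(lo, min(hi, v))        # keep gains in bounds
                else:
                    v = pop[i][j]                  # crossover: keep parent gene
                trial.append(v)
            if f(trial) <= f(pop[i]):              # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(cost, bounds=[(0.0, 2.0), (0.0, 1.0)])
print(best)  # gains approach (0.4, 0.05)
```

PSO and CE would slot into the same structure: only the way candidate gain vectors are generated and updated changes, while the (expensive) simulation-based cost evaluation stays identical.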

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Emotion is generally argued to influence the behavior of living systems, largely with respect to flexibility and adaptivity. The way in which living systems act in response to particular situations in their environment has revealed the decisive importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have evolved to the point that more of them are integrated into our daily life every day. They have grown in complexity, with a consequent increase in the demand for systems that ensure resilience, robustness, availability, security, and safety, among other properties. All of these questions raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous System project, a.k.a. the ASys-Project. Its short-term objectives of immediate application focus on the design of improved systems and on bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness, and autonomy. This thesis lies within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning constitutes the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems.
Assuming essential differences between living and artificial systems concerning structure, purpose, and operation, the essential motivation has been to explore what emotion solves in nature and then to analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions, and informational artifacts interact to supply the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation that are tied to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules attending to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis, beyond the state of the art, of the most relevant theories of emotion and of technical systems, in order to obtain the support required by the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys. The solution consists of multiple computational modules working in federation, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems: while AGSys operates as an ordinary system within the external environment, ESys is designed to operate within a conceptualized inner environment.
This inner environment is built from the relevances that might arise inside AGSys in its interaction with the external environment. This allows for high-quality separate reasoning concerning the mission goals defined in AGSys and the emotional goals defined in ESys, and thereby provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of the emotional goals, opening new directions in which mission goals might be assessed under the situational state of that stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy to a well-known theory in emotion science. The operation of this model is described in terms of a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge about the relation between a perceived object and the effect that this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning. The extent to which this reasoning might be taken further is only delineated and left as future work. This thesis draws on a wide range of fields of knowledge, which can be structured according to two main objectives: (a) psychology, cognitive science, neurology, and the biological sciences, in order to understand the problem of emotional phenomena; and (b) a large number of computer-science branches, such as Autonomic Computing (AC), self-adaptive software, self-X systems, Model Integrated Computing (MIC), and the models@runtime paradigm, in order to obtain tools for designing each part of the solution.
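The AGSys/ESys arrangement described above can be caricatured in a few lines of code. The sketch below is purely illustrative: the class names mirror the abstract, but the appraisal rule, the goal weights, and the action-selection step are invented stand-ins, since the abstract does not specify them.

```python
# Purely illustrative sketch (appraisal rule and weights are hypothetical):
# the outer agent (AGSys) perceives the external environment, forwards the
# relevances it detects to its inner agent (ESys), and receives back an
# emotion-like appraisal that biases its next decision.

class ESys:
    """Inner agent: appraises relevances against its emotional goals."""
    def __init__(self, goals):
        self.goals = goals                       # goal -> desired level

    def appraise(self, relevances):
        # Toy appraisal: residual margin of each goal under the relevances.
        return {g: w - relevances.get(g, 0.0) for g, w in self.goals.items()}

class AGSys:
    """Outer agent: embodies ESys and closes the mutual feedback loop."""
    def __init__(self):
        self.esys = ESys({"safety": 1.0, "energy": 0.5})

    def step(self, percept):
        # Only the portion of the percept relevant to emotional goals is
        # projected into the inner environment.
        relevances = {k: v for k, v in percept.items() if k in self.esys.goals}
        appraisal = self.esys.appraise(relevances)
        # Mission-level decision biased by the appraisal: attend first to
        # the goal with the smallest remaining margin.
        return min(appraisal, key=appraisal.get)

agent = AGSys()
print(agent.step({"safety": 0.9, "noise": 0.2}))  # prioritizes "safety"
```

Even in this caricature, the two features the abstract emphasizes are visible: the inner environment contains only relevances (the filtered percept), and the appraisal feeds back into mission-level choice rather than replacing it.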
The final approach has been carried out mainly on the basis of the entire acquired knowledge, and is described within the fields of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalizations wherever precise understanding was required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning about (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system resulting from those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be taken as artificial emotion.