30 results for Controllability of systems
Abstract:
In this work we investigate the stochastic behavior of a large class of systems with variable damping, described by a time-dependent Lagrangian. Our stochastic approach is based on the Langevin treatment of the motion of a classical Brownian particle of mass m. Two situations of physical interest are considered. In the first, we discuss in detail an application of the standard Langevin treatment (white noise) to the variable-damping system. In the second, a more general viewpoint is adopted by assuming a given expression for the so-called colored noise. In both cases, the basic differential equations are solved analytically and all the physically relevant quantities are explicitly determined. The results depend on an arbitrary parameter q measuring how far the behavior of the system departs from that of a standard Brownian particle with constant viscosity. Several types of stochastic behavior (superdiffusive and subdiffusive) are obtained as the free parameter varies continuously. However, all the results of the conventional Langevin approach with constant damping are recovered in the limit q = 1.
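For orientation, a minimal sketch of the kind of equation involved — the standard Langevin equation with a time-dependent damping coefficient γ(t) in place of the usual constant; the specific q-dependent form of γ(t) used in the work is not stated in the abstract and is left unspecified here:

```latex
% Sketch: Langevin equation with time-dependent damping \gamma(t);
% the q-dependent form of \gamma(t) is an assumption left open here.
m\,\frac{dv}{dt} = -\,m\,\gamma(t)\,v + \eta(t),
\qquad \langle \eta(t) \rangle = 0 .
% White-noise case: delta-correlated fluctuations,
\langle \eta(t)\,\eta(t') \rangle = 2D\,\delta(t - t') ;
% colored-noise case: a finite correlation time \tau,
\langle \eta(t)\,\eta(t') \rangle = \frac{D}{\tau}\, e^{-|t - t'|/\tau} .
```

In the constant-damping limit γ(t) → γ (the q = 1 case above), the white-noise equation reduces to ordinary Brownian motion with normal diffusion, ⟨x²⟩ ∝ t; super- and subdiffusive regimes correspond to ⟨x²⟩ ∝ t^α with α > 1 and α < 1, respectively.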
Abstract:
Leather tanneries generate effluents with a high content of heavy metals, especially chromium, which is used in the mineral tanning process. Microemulsions have been studied for the extraction of heavy metals from aqueous solutions. Considering the problems posed by the sediment resulting from the tanning process, due to its high chromium content, in this work this sediment was characterized and microemulsion systems were applied for chromium removal. The extraction process consists in removing the heavy metal ions present in an aqueous feed solution (acid digestion solution) with a microemulsion system. First, three different solid sludge digestion methods were evaluated, and the one with the highest digestion capacity was chosen. For this digestion method, seeking its optimization, the influence of granule size, temperature, and digestion time was evaluated. Experimental results showed that the method proposed by USEPA (Method A) was the most efficient, reaching 95.77% sample digestion. Regarding the evaluated parameters, the best results were achieved at 95°C, 14 mesh granule size, and 60 minutes of digestion time. For chromium removal, three microemulsion extraction methods were evaluated: Method 1, in a Winsor II region, using the acid digestion solution as aqueous phase; Method 2, in a Winsor IV region, obtained by adding the acid digestion solution to a microemulsion phase whose aqueous phase is distilled water, until a Winsor II system forms; and Method 3, consisting in the formation of a Winsor III region using as aqueous phase the acid digestion solution diluted in 0.01 N NaOH. Seeking to optimize the extraction process, only Method 1 (Systems I, II, and VIII) and Method 2 (System IX) were evaluated, with points inside the regions of interest (studied domains) chosen to study the influence of contact time and pH on the extraction percentages. The studied systems have the following compositions. System I: surfactant, saponified coconut oil; cosurfactant, 1-butanol; oil phase, kerosene; aqueous phase, 2% NaCl solution. System II: aqueous phase, acid digestion solution with pH adjusted using KOH (pH 3.5). System VIII: aqueous phase, acid digestion solution (pH 0.06). System IX: aqueous phase, distilled water (pH 10.24). The other phases of Systems II, VIII, and IX are the same as in System I. Method 2 proved to be the more efficient one in terms of chromium extraction percentage (up to 96.59%, at pH 3.5). Considering that with Method 2 the microemulsion appears only in the Winsor II region, Method 3 (System X) was studied for the evaluation and characterization of a triphasic system, for comparison with a biphasic system. System X is composed of: surfactant, saponified coconut oil; cosurfactant, 1-butanol; oil phase, kerosene; aqueous phase, acid digestion solution diluted with water and with its pH adjusted using 0.01 N NaOH solution. The biphasic and triphasic microemulsion systems were analyzed with respect to viscosity, extraction efficiency, and effective drop diameter. The experimental results showed low viscosity values for all studied systems; the drop diameter is smaller in the Winsor II region, at 15.5 nm, reaching 46.0 nm in the Winsor III region, a difference attributed to variations in system composition and micelle geometry. In chromium extraction, these points showed similar results, reaching 99.76% for the Winsor II system and 99.62% for the Winsor III system.
The Winsor III system proved more efficient because it yields a microemulsion of smaller volume, makes it possible to recover the excess oil phase, and uses a smaller proportion of cosurfactant/surfactant (C/S).
Abstract:
It is increasingly common for a single computer system to be used across different devices - personal computers, cellular telephones, and others - and software platforms - graphical user interface systems, Web systems, and others. Depending on the technologies involved, different software architectures may be employed. For example, Web systems typically use a client-server architecture - usually extended into three layers. In systems with graphical interfaces, an architecture in the MVC style is common. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. Another aggravating factor is that the user interface on each device often has a different structure, appearance, and behaviour, which leads to low usability. Finally, building user interfaces specific to each of the devices involved, with distinct features and technologies, is work that must be done individually and does not allow scalability. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency between the user interfaces, and greater flexibility and scalability for the incorporation of new devices.
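As a rough illustration of the idea (not actual MML syntax and not the dissertation's architecture - every name below is hypothetical), an abstract UI element can be described once and rendered by platform-specific backends:

```java
// Hypothetical sketch: one abstract UI description, several renderers.
// None of these names come from MML or the dissertation.
interface Renderer {
    void render(AbstractButton button);
}

// Platform-independent description of a UI element.
record AbstractButton(String label, Runnable onClick) {}

class WebRenderer implements Renderer {
    public void render(AbstractButton b) {
        System.out.println("<button>" + b.label() + "</button>");
    }
}

class MobileRenderer implements Renderer {
    public void render(AbstractButton b) {
        System.out.println("[native widget] " + b.label());
    }
}

public class Demo {
    public static void main(String[] args) {
        AbstractButton ok = new AbstractButton("OK", () -> {});
        // The same abstract specification drives every platform.
        new WebRenderer().render(ok);
        new MobileRenderer().render(ok);
    }
}
```

The design point is that only the renderers are platform-specific: adding a new device means adding one renderer, not rebuilding every interface.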
Abstract:
The use of middleware technology in various types of systems, to abstract away low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication must be enabled between software components located on different physical machines. An important issue in the communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, monitoring of the communication state is planned (applying techniques such as a feedback control loop), analyzing the related performance metrics. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as proof of concept of the metamodel, along with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this context, the metamodel associated with the configuration of a communication was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
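A minimal sketch of the monitor-analyze-adapt idea mentioned above (a feedback control loop over a QoS metric); the class and method names are hypothetical, not taken from the metamodel:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical communication mechanism with one tunable parameter.
interface CommunicationMechanism {
    double measuredLatencyMs();      // monitored performance metric
    void setBufferSize(int bytes);   // reconfigurable parameter
}

public class QosControlLoop {
    private final CommunicationMechanism mechanism;
    private final double latencyBoundMs;   // QoS restriction
    private int bufferSize = 4096;

    QosControlLoop(CommunicationMechanism m, double boundMs) {
        this.mechanism = m;
        this.latencyBoundMs = boundMs;
    }

    // Periodically monitor the metric and adapt the configuration
    // whenever the QoS restriction is violated.
    void start() {
        ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            double latency = mechanism.measuredLatencyMs();  // monitor
            if (latency > latencyBoundMs) {                  // analyze
                bufferSize = Math.max(512, bufferSize / 2);  // plan
                mechanism.setBufferSize(bufferSize);         // execute
            }
        }, 0, 1, TimeUnit.SECONDS);
    }
}
```

Replacing the mechanism entirely (rather than retuning its parameters) would be the second, coarser-grained adaptation the abstract describes.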
Abstract:
Multi-agent system designers need to determine the quality of systems in the earliest phases of the development process. The architectures of the agents are part of the design of these systems and therefore also need to have their quality evaluated. Motivated by the important role that emotions play in our daily lives, embodied-agent researchers have aimed to create agents capable of affective, natural interaction with users that produces a beneficial or desirable result. To this end, several studies proposing agent architectures with emotions have appeared, without accompanying methods suitable for assessing these architectures. The objective of this study is to propose a methodology for evaluating architectures of emotional agents which assesses both the quality attributes of the architectural design and, through human-computer interaction evaluation, the effects on the subjective experience of users of applications that implement them. The methodology is based on a model of well-defined metrics. In assessing the quality of the architectural design, the attributes assessed are extensibility, modularity, and complexity. In assessing the effects on users' subjective experience, which involves implementing the architecture in an application (we suggest the domain of computer games), the metrics are: enjoyment, felt support, warmth, caring, trust, cooperation, intelligence, interestingness, naturalness of emotional reactions, believability, reduction of frustration, and likeability, together with average time and average attempts. We experimented with this approach by evaluating five architectures of emotional agents: BDIE, DETT, Camurra-Coglio, EBDI, and Emotional-BDI. Two of the architectures, BDIE and EBDI, were implemented in a version of the game Minesweeper and evaluated for human-computer interaction. In the results, DETT stood out with the best architectural design. Users who played the version of the game with emotional agents performed better than those who played without agents. In the assessment of the users' subjective experience, the differences between the architectures were insignificant.
Abstract:
Removing inconsistencies in a project is a less expensive activity when done in the early steps of design. The use of formal methods improves the understanding of systems: they offer various techniques, such as formal specification and verification, to identify these problems in the initial stages of a project. However, the transformation from a formal specification into a programming language is a non-trivial and error-prone task, especially when done manually. Tool support at this stage can bring great benefits to the final product. This work proposes the extension of a tool whose focus is the automatic translation of specifications written in CSPM into Handel-C. CSP is a formal description language suitable for concurrent systems, and CSPM is the notation used in tool support. Handel-C is a programming language whose output can be compiled directly into FPGAs. Our extension increases the number of CSPM operators accepted by the tool, allowing the user to define local processes, to rename channels in a process, and to use Boolean guards on external choices. In addition, we also propose the implementation of a communication protocol that eliminates some restrictions on the parallel composition of processes in the translation into Handel-C, allowing communication on a same channel between multiple processes to be mapped in a consistent manner, and ensuring that improper communication on a channel - i.e., communications that are not allowed in the system specification - does not occur in the generated code.
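To give a feel for the problem such a protocol addresses (this is an illustrative Java analogy, not the dissertation's protocol and not Handel-C code): when several writer processes share one channel, deliveries must be arbitrated so that each message reaches exactly one matching reader and no communication happens outside a matched pair.

```java
import java.util.concurrent.SynchronousQueue;

// Illustrative analogy only. A SynchronousQueue behaves like a
// rendezvous channel: each put() blocks until a reader take()s, so
// when several writers share one channel every value is delivered to
// exactly one reader, and nothing is exchanged without a matching
// communication - the consistency the translation must preserve.
public class SharedChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        // Two writer processes competing for the same channel.
        for (int id = 0; id < 2; id++) {
            final int writer = id;
            new Thread(() -> {
                try {
                    channel.put(writer);  // blocks until a reader arrives
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        // One reader process: each take() matches exactly one writer.
        System.out.println("received " + channel.take());
        System.out.println("received " + channel.take());
    }
}
```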
Abstract:
The Software Product Line (SPL) approach has become very promising, since it allows the production of customized systems on a large scale through product families. For the modeling of these families the Features Model is widely used; however, it is a model with a low level of detail and may not be sufficient to guide the SPL development team. Thus, it is recommended to complement the Features Model with other models representing the system from other perspectives. The goals model PL-AOVgraph can assume this complementary role, since it has a language oriented to the SPL context, which allows detailed requirements modeling and the identification of crosscutting concerns that may arise as a result of variability. In order to insert PL-AOVgraph into SPL development, this work proposes a bidirectional mapping between PL-AOVgraph and the Features Model, automated by the ReqSys-MDD tool. This tool uses the Model-Driven Development (MDD) approach, which allows the construction of systems from high-level models through successive transformations. This enables the integration of ReqSys-MDD with other MDD tools that use its output models as input to other transformations, making it possible to keep consistency among the models involved and to avoid loss of information in the transitions between development stages.
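As a hedged illustration of what a bidirectional model-to-model mapping can look like (the types below are hypothetical and much simpler than PL-AOVgraph or the Features Model), each feature is mapped to a goal and back, and a round trip loses no information:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified metamodels - not the dissertation's.
record Feature(String name, boolean optional) {}
record Goal(String name, boolean isVariationPoint) {}

public class FeatureGoalMapping {
    // Forward transformation: Features Model -> goals model.
    static List<Goal> toGoals(List<Feature> features) {
        return features.stream()
                .map(f -> new Goal(f.name(), f.optional()))
                .collect(Collectors.toList());
    }

    // Backward transformation: goals model -> Features Model.
    // Keeping both directions total over the shared fields is what
    // makes the mapping bidirectional and consistency-preserving.
    static List<Feature> toFeatures(List<Goal> goals) {
        return goals.stream()
                .map(g -> new Feature(g.name(), g.isVariationPoint()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Feature> fm = List.of(new Feature("payment", false),
                                   new Feature("discount", true));
        // Round-trip check: no information on these fields is lost.
        System.out.println(toFeatures(toGoals(fm)).equals(fm)); // true
    }
}
```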
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an alternative architecture for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip provide component reuse, parallelism, and scalability, enhancing reusability in projects for dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is a non-conventional one, since it allows operations to be executed while the communication process is performed. This study aims to evaluate the execution of dataflow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Dataflow-based applications are characterized by a continuous stream of data on which operations are executed. We expect this type of application to perform well on IPNoSys, because its programming model is similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the execution model of the IPNoSys network, allowing the implementation of instruction-level parallelism. For these purposes, implementations of dataflow applications were analyzed and compared.
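A toy sketch of the execute-while-routing idea (purely illustrative; it does not model IPNoSys's actual packet format or pipeline): a packet carries both data and pending operations, and each hop applies one operation while forwarding.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.IntUnaryOperator;

// Toy model: a packet carries a value plus a queue of operations, and
// every router hop executes one operation while forwarding - so
// computation overlaps with communication. All names are illustrative.
public class ExecuteWhileRouting {
    static class Packet {
        int value;
        final Deque<IntUnaryOperator> ops = new ArrayDeque<>();
        Packet(int value) { this.value = value; }
    }

    // One router hop: apply the next pending operation, if any.
    static void hop(Packet p) {
        IntUnaryOperator op = p.ops.poll();
        if (op != null) p.value = op.applyAsInt(p.value);
    }

    public static void main(String[] args) {
        Packet p = new Packet(3);
        p.ops.add(x -> x + 5);   // operations injected at the source
        p.ops.add(x -> x * 2);

        hop(p);  // executed at the first router on the path
        hop(p);  // executed at the second router
        System.out.println(p.value);  // (3 + 5) * 2 = 16
    }
}
```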
Abstract:
The field of Wireless Sensor and Actuator Networks (WSAN) is growing fast and has attracted the interest of both the research community and industry because of several factors, such as the applicability of such networks in different application domains (aviation, civil engineering, medicine, and others). Moreover, advances in wireless communication and the reduction in the size of hardware components have also contributed to the fast spread of these networks. However, there are still several challenges and open issues that need to be tackled in order to achieve the full potential of WSAN usage, and the development of WSAN systems is one of the most relevant of them, considering the number of variables involved in this process. Currently, a broad range of WSAN platforms and low-level programming languages are available to build WSAN systems. Thus, developers need to deal with details of different sensor platforms and low-level programming abstractions of sensor operating systems on one hand, and they also need specific (high-level) knowledge about the distinct application domains on the other. Therefore, in order to decouple the handling of these two different levels of knowledge and ease the development process of WSAN systems, we propose LWiSSy (Domain Language for Wireless Sensor and Actuator Networks Systems), a domain-specific language (DSL) for WSAN. The use of DSLs raises the abstraction level during the programming of systems and modularizes the system building into several steps. Thus, LWiSSy allows domain experts to contribute directly to the development of WSANs without knowledge of low-level sensor platforms, and network experts to program sensor nodes to meet application requirements without specific knowledge of the application domain. Additionally, LWiSSy enables system decomposition into different levels of abstraction according to structural and behavioral features and granularities (network, node-group, and single-node level programming).
Abstract:
Reconfigurable Computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. A reconfigurable architecture has several goals, among them the increase of performance. The use of reconfigurable architectures to increase the performance of systems is a well-known technique, especially because of the possibility of implementing directly in hardware certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands, and operations. The main objective of this work is the development of a RISP processor, combining techniques for configuring the set of instructions executed by the processor during development with reconfiguration of that set at execution time. The design and VHDL implementation of this RISP processor is intended to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time; and the possibility of creating and combining new instructions, so that the processor comes to recognize and use them in real time as if they existed in the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, which can later be used as if they were fixed instructions of the processor. This work also includes simulations of applications involving fixed and custom instructions, and results of comparisons between these applications with respect to power consumption and execution time, which confirm the attainment of the goals for which the processor was developed.
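A small software sketch of the dispatch idea (illustrative only; the actual work is a VHDL hardware design, and all names here are hypothetical): several fixed instruction sets with one active at a time, plus custom instructions registered at run time and dispatched as if they were fixed.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntBinaryOperator;

// Software analogy of the RISP dispatch idea described above.
public class RispSketch {
    // Two fixed instruction sets; only one is active at a time.
    private final Map<String, IntBinaryOperator> setA = new HashMap<>();
    private final Map<String, IntBinaryOperator> setB = new HashMap<>();
    private Map<String, IntBinaryOperator> active = setA;

    // Custom instructions added at run time by the reconfiguration
    // unit; looked up exactly like fixed ones.
    private final Map<String, IntBinaryOperator> custom = new HashMap<>();

    RispSketch() {
        setA.put("ADD", (a, b) -> a + b);
        setB.put("MUL", (a, b) -> a * b);
    }

    void switchTo(char set) { active = (set == 'A') ? setA : setB; }

    void reconfigure(String opcode, IntBinaryOperator op) {
        custom.put(opcode, op);
    }

    int execute(String opcode, int a, int b) {
        IntBinaryOperator op = active.getOrDefault(opcode, custom.get(opcode));
        if (op == null) throw new IllegalArgumentException("unknown opcode " + opcode);
        return op.applyAsInt(a, b);
    }

    public static void main(String[] args) {
        RispSketch cpu = new RispSketch();
        System.out.println(cpu.execute("ADD", 2, 3));   // fixed set A -> 5
        cpu.reconfigure("MAC0", (a, b) -> a * b + 1);   // custom instruction
        System.out.println(cpu.execute("MAC0", 2, 3));  // used as if fixed -> 7
    }
}
```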
An approach to verifying exceptional behavior based on design rules and tests
Abstract:
Checking conformity between the implementation and the design rules of a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system, defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify design rules related to the exception handling policy of applications. This work proposes a practical approach to preserving the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for the exception handling of systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and automates testing activities for exceptional design rules. We conducted a case study whose primary objective was to evaluate the effectiveness of the proposed approach on a software product line. Besides this, an experiment was conducted to perform a comparative analysis between the proposed approach and one based on a tool called JUnitE, which also proposes testing exception handling code using JUnit tests. The results showed how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
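For flavor, here is a plain JUnit test of the kind of rule such approaches target - that a service method translates low-level failures into a domain exception instead of letting them escape. This is a generic sketch with hypothetical class names, not VITTAE's rule notation:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Generic sketch of an exception-handling design rule encoded as a
// JUnit test; VITTAE's actual rule syntax is not reproduced here.
class ExceptionRuleTest {

    static class RepositoryFailure extends RuntimeException {}
    static class ServiceException extends RuntimeException {
        ServiceException(Throwable cause) { super(cause); }
    }

    // The rule: AccountService must catch RepositoryFailure and
    // rethrow it as ServiceException - callers never see the raw one.
    static class AccountService {
        int balance(String id) {
            try {
                throw new RepositoryFailure();  // simulated lower layer
            } catch (RepositoryFailure e) {
                throw new ServiceException(e);
            }
        }
    }

    @Test
    void repositoryFailuresMustSurfaceAsServiceExceptions() {
        AccountService service = new AccountService();
        // Passes only if the design rule is respected.
        assertThrows(ServiceException.class, () -> service.balance("acc-1"));
    }
}
```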
Abstract:
The development of software systems with domain-specific languages has become increasingly common. Domain-specific languages (DSLs) increase domain expressiveness, raising the abstraction level by facilitating the generation of models or low-level source code and thus increasing the productivity of systems development. Consequently, methods for the development of software product lines and software system families have also proposed the adoption of domain-specific languages. Recent studies have investigated the limitations of feature model expressiveness and proposed the use of DSLs as a complement to or substitute for feature models. However, in complex projects, a single DSL is often insufficient to represent the different views and perspectives of development, making it necessary to work with multiple DSLs. In order to address the new challenges in this context, such as managing consistency between DSLs and the need for methods and tools that support development with multiple DSLs, several approaches for the development of generative approaches have been proposed over the past years. However, none of them considers matters relating to the composition of DSLs. Thus, aiming to address this problem, the main objectives of this dissertation are: (i) to investigate the integrated use of feature models and DSLs during the domain and application engineering of generative approach development; (ii) to propose a method for the development of generative approaches with DSL composition; and (iii) to investigate and evaluate the usage of modern model-driven engineering technologies to implement strategies of integration between feature models and DSL composition.
Abstract:
The component-based development of systems revolutionized the software development process, facilitating maintenance and providing more reliability and reuse. Nevertheless, even with all the advantages of component-based development, the composition of components remains an important concern. Verification through informal tests is not enough to achieve safe composition, because such tests are not based on formal semantic models with which we can precisely describe a system's behaviour. In this context, formal methods provide ways to specify systems accurately through mathematical notations, providing, among other benefits, more safety. The formal method CSP enables the specification of concurrent systems and the verification of properties intrinsic to them, as well as refinement between different models. Some approaches apply constraints using CSP to check the behavior of compositions of components, helping to verify those components in advance. Hence, aiming to assist this process, and considering that the software market increasingly requires automation, reducing work and providing business agility, this work presents a tool that automates the verification of composition among components, in which all the complexity of the formal language is kept hidden from users. Thus, through a simple interface, the tool BST (BRIC-Tool-Suport) helps to create and compose components, predicting, in advance, undesirable behaviors in the system, such as deadlocks.
Abstract:
The Caatinga biome has a high diversity of potential uses, and its conservation constitutes one of the greatest challenges of Brazilian science. Sustainable management of the Caatinga emerges as an alternative that, through the formation of agrosilvopastoral systems, enables the use of forest resources in a sustainable way, ensuring their conservation, regeneration, and recovery. In Rio Grande do Norte (RN) this technique has been developed mainly in Agrarian Reform settlements, such as P. A. Moaci Lucena, and its impacts go beyond the environmental aspect, reverberating socially and economically in the quality of life of family farmers. Despite the efficiency of sustainable management of the Caatinga in the conservation of native species, many forest species of this biome face serious propagation problems and for this reason have become vulnerable to extinction, as is the case of Mimosa caesalpiniifolia Benth. Thus, there is an evident need for sustainable alternatives that overcome the propagation difficulties of this species and enable its replacement in areas where its existence is threatened. Plant biotechnology is considered a promising alternative in this sense, given that micropropagation enables the large-scale production of seedlings of high genetic and sanitary quality. This work has the following objectives: to evaluate the perception of the family farmers of P. A. Moaci Lucena regarding the social, environmental, and economic impacts of the sustainable management of the Caatinga, and to determine germination and in vitro propagation conditions for Mimosa caesalpiniifolia Benth that enable the production of seedlings of this species on a large scale. For the first objective, semi-structured interviews showed that, in the perception of the farmers of P. A. Moaci Lucena, the sustainable management of the Caatinga was responsible for many social, environmental, and economic impacts that directly improved the quality of life of the families of the Moaci Lucena Settlement Project. For the second objective, the influence of different substrates and of concentrations of the growth regulator BAP on germination and in vitro shoot induction of Mimosa caesalpiniifolia Benth was investigated. Vermiculite proved to be the most suitable substrate for germination of this species, because it provided faster germination, higher growth rates, and higher dry matter accumulation. Regarding micropropagation, the concentration of 17.76 μmol/L of BAP was the most responsive in terms of multiplication rate and number of shoots in M. caesalpiniifolia, thus constituting the most suitable concentration for the in vitro propagation of this species.
Abstract:
Transport systems shape the use of territory in different Brazilian cities with regard to the occupation of road systems in urban areas. Implemented engineering systems and transport infrastructure, such as roads, signs, stops, stations, and road complexes (bridges, viaducts, and tunnels), are not used in the same way across the territory. The subway does not use the same infrastructure as the bus, and vice versa. The time spent in travel, the access time, and the number of trips made by passengers are not the same in each mode of transport. The use of transport systems in the territory, therefore, takes place as a whole in the current technical-scientific-informational period. This work treats the territory used as a synonym of geographical space, analyzed through two categories of analysis: the systems of objects, formed by the fixed elements, and the systems of actions, formed by the flows. The system analyzed is public transport by bus and the displacement of the population that uses this mode between home and work, taking as empirical object the Lagoa Azul neighborhood, located in the northern administrative district of Natal/RN. The general objective of this research is to understand the extent to which public transport has contributed to the socio-spatial accessibility of the residents of the Lagoa Azul neighborhood, located in Natal-RN, emphasizing the journey between home and the workplace. To reach this general objective, a study was made, in light of the chosen methodological approach, of the empirical facts, statistical data, and theoretical knowledge about the events occurring in the Lagoa Azul neighborhood related to economic aspects. For this, the concepts of mobility and accessibility are used.