Abstract:
Corrosion is a natural process that causes progressive deterioration of materials, so reducing its effects is a major objective of scientific research. In this work, the corrosion inhibition efficiency on AISI 1018 carbon steel of a nanoemulsion system containing oil from the seeds of Azadirachta indica A. Juss (SNEOAI) was evaluated by linear polarization resistance (LPR) and by weight loss (CPM) in an instrumented cell. For this purpose, a hydroalcoholic extract of A. indica leaves (EAI) was solubilized in the nanoemulsion system (SNEOAI), an O/W system rich in aqueous phase. This nanoemulsion system, tested at different concentrations, was obtained with oil from the seeds of the same plant species (OAI) as oil phase, dodecylammonium chloride (DDAC) as surfactant, butanol as cosurfactant and water, using 30 % of C/T (cosurfactant/surfactant), 0.5 % of oil phase and 69.5 % of aqueous phase, and was characterized by surface tension, rheology and droplet size. The systems SNEOAI and SNEOAI-EAI (nanoemulsion containing the hydroalcoholic extract EAI) showed inhibition efficiencies in a saline corrosive medium (1 %): by the LPR method, significant values of 70.58 % (300 ppm) for SNEOAI, and 74.17 % (100 ppm) and 72.51 % (150 ppm) for SNEOAI-EAI. The best inhibition efficiencies were observed with the CPM method, with 85.41 % for SNEOAI (300 ppm) and 83.19 % for SNEOAI-EAI (500 ppm). The results show that this formulation could be used commercially as a corrosion inhibitor. This research contributes to the biotechnological applicability of Azadirachta indica, given the wide use of this plant species, which is rich in limonoids (tetranortriterpenoids), especially azadirachtin.
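For reference, the inhibition efficiencies quoted above are commonly computed from the polarization resistance and from the weight-loss rate as follows (standard definitions, not taken verbatim from this work; \(R_p\) is the polarization resistance, \(W\) the weight-loss rate, the subscript 0 denotes the blank medium and inh the inhibited one):

\[
\eta_{\mathrm{LPR}}(\%) = \frac{R_{p,\mathrm{inh}} - R_{p,0}}{R_{p,\mathrm{inh}}} \times 100,
\qquad
\eta_{\mathrm{WL}}(\%) = \frac{W_{0} - W_{\mathrm{inh}}}{W_{0}} \times 100
\]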
Abstract:
In this research, the removal of light and heavy oil from disintegrated limestone using microemulsions was investigated. These chemical systems were composed of surfactant, cosurfactant, oil phase and aqueous phase. For the studied systems, three points in the water-rich microemulsion region of the phase diagrams were used in the oil removal experiments. The microemulsion systems were characterized to evaluate the influence of particle size, surface tension, density and viscosity on micellar stability and to understand how these physical properties can influence the oil recovery process. The limestone rock sample was characterized by thermogravimetry, BET surface area, scanning electron microscopy and X-ray fluorescence. After preparation, the rock was placed in contact with light and heavy oil solutions to allow oil adsorption. The removal tests evaluated the influence of contact time (1, 30, 60 and 120 minutes), concentration of active matter (20, 30 and 40 %), different cosurfactants and different oil phases. For the heavy oil, the best result was obtained with SME 1, with 20 % of active matter and 1 minute of contact time, with an efficiency of 93.33 %. For the light oil, the best result was also obtained with SME 1, with 20 % of active matter and 120 minutes of contact time, with 62.38 % efficiency. From these results, it was concluded that microemulsions can be considered efficient chemical systems for oil removal from limestone formations.
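A common way of expressing the removal efficiency reported above (a generic definition, assuming it is calculated from the mass of oil initially adsorbed on the rock and the mass remaining after treatment with the microemulsion):

\[
E(\%) = \frac{m_{\mathrm{adsorbed}} - m_{\mathrm{residual}}}{m_{\mathrm{adsorbed}}} \times 100
\]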
Abstract:
During the storage of oil, sludge forms at the bottom of tanks due to decantation; this sludge is composed of a large quantity of oil (heavy petroleum fractions), water and solids. Oil sludge is a complex viscous mixture considered a hazardous waste. It is therefore necessary to develop methods and technologies that optimize the cleaning process, the oil extraction and its applications in industry. This study aimed to determine the composition of oil sludge, to obtain and characterize microemulsion systems (MES), and to study their application in sludge treatment. In this context, Soxhlet extraction of crude and aged oil sludge was carried out, allowing quantification of the oil (43.9 % and 84.7 % - 13 ºAPI), water (38.7 % and 9.15 %) and solids (17.3 % and 6.15 %) contents, respectively. The residues were characterized by X-ray fluorescence (XRF), X-ray diffraction (XRD) and transmission infrared spectroscopy (FT-IR). XRF indicated iron and sulfur in higher proportions, and XRD confirmed the presence of the minerals pyrite (FeS2), pyrrhotite (FeS) and magnetite (Fe3O4). FT-IR showed the presence of heavy oil fractions. In parallel, twelve MES were prepared by combining the following constituents: two nonionic surfactants (Unitol L90 and Renex 110 - S), three cosurfactants (butanol, sec-butanol and isoamyl alcohol - C), three aqueous phases (tap water - ADT, acidic solution - 6 % HCl, and saline solution - 3.5 % NaCl - AP) and one oil phase (kerosene - OP). From the obtained systems, a common point belonging to the microemulsion region was chosen (25 % [C+S], 5 % OP and 70 % AP) and characterized at room temperature (25 °C) by viscosity (Haake Mars rheometer), particle diameter (Zeta Plus) and thermal stability. Mixtures with this composition were applied to oil sludge solubilization under agitation at a ratio of 1:4, varying time and temperature. The solubilization efficiencies, calculated excluding the solids, ranged between 73.5 % and 95 %. Two systems were then selected for use in storage tanks, with oil sludge solubilization efficiencies over 90 %, proving the effectiveness of the MES. A factorial design delimited within this domain showed, through predictive models, how the MES constituents affect the solubilization of aged oil sludge. MES A was chosen as the best system, solubilizing a large amount of aged crude oil sludge (~151.7 g/L of MES).
Abstract:
Knowledge of the rheological behavior of microemulsion systems (SME) is of fundamental importance due to the diversity of industrial applications of these systems. This dissertation presents the rheological behavior of the microemulsion system formed by RNX 95/isopropyl alcohol/sodium p-toluenesulfonate/kerosene/distilled water with the addition of polyacrylamide polymer. Three polyacrylamide-type polymers, differing in molar mass and charge density, were chosen. The addition of these polymers was studied at a relatively small concentration of 0.1 % by mass and at a maximum concentration of 2.0 %. Flow analyses were performed to determine the apparent viscosities of the SME and the rheological parameters, applying the Bingham, Ostwald-de Waele and Herschel-Bulkley models. The behavior of this system in a saline environment was studied using a 2.0 % KCl solution in place of the distilled water. The behavior of the microemulsions with temperature was determined through flow curves from 25 to 60 °C in steps of 5 °C. The analysis of the results showed that the microemulsion without polymer presented a slight increase in viscosity, which does not mischaracterize it as a Newtonian fluid. The systems with low polymer concentration fitted the applied models well, with behavior very close to that of the microemulsion. The higher polymer concentration gave the systems a plastic fluid behavior. The results of the temperature variation point to an increase of viscosity in the systems, which may be related to structural changes in the micelles formed in the microemulsion itself, even without the addition of polymer.
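For reference, the three constitutive models applied above are usually written as follows (standard forms, where \(\tau\) is the shear stress, \(\dot{\gamma}\) the shear rate, \(\tau_0\) the yield stress, \(K\) the consistency index, \(n\) the flow behavior index and \(\eta_p\) the plastic viscosity):

\[
\text{Bingham: } \tau = \tau_0 + \eta_p \dot{\gamma}, \qquad
\text{Ostwald-de Waele: } \tau = K \dot{\gamma}^{\,n}, \qquad
\text{Herschel-Bulkley: } \tau = \tau_0 + K \dot{\gamma}^{\,n}
\]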
Abstract:
Layered double hydroxides have become extremely promising materials due to their wide range of applications, the ease with which they are obtained in the laboratory and their reusability after calcination, so knowledge regarding their properties is of utmost importance. In this study, layered double hydroxides of two systems, Mg-Al and Zn-Al, were synthesized and analyzed by X-ray diffraction; from these data, the volume density, planar atomic density, crystallite size, lattice parameters, interplanar spacing and available interlayer space were determined. The materials were also subjected to thermogravimetric analysis at heating rates of 5, 10, 20 and 25 °C/min to determine kinetic parameters for the formation of the HTD and HTB metaphases, based on the Ozawa, Flynn-Wall, Starink and Model-Free Kinetics theoretical models. In addition, the layered double hydroxides synthesized in this work were calcined at heating rates of 2.5 °C/min and 20 °C/min and tested for adsorption of the nitrate anion from aqueous solution in a batch system at time intervals of 5 min, 15 min, 30 min, 1 h, 2 h and 4 h. The calcined materials were also exposed to the atmosphere and, at intervals of 1 week, 2 weeks and 1 month, were analyzed by infrared spectroscopy to study the kinetics of the structural regeneration known as the "memory effect".
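For reference, the isoconversional methods cited above extract the activation energy \(E_a\) at a fixed conversion from the slope of plots of the following standard forms (where \(\beta\) is the heating rate, \(T\) the absolute temperature at that conversion and \(R\) the gas constant):

\[
\text{Ozawa / Flynn-Wall: } \ln \beta = \mathrm{const} - 1.052\,\frac{E_a}{R T},
\qquad
\text{Starink: } \ln\!\left(\frac{\beta}{T^{1.92}}\right) = \mathrm{const} - 1.0008\,\frac{E_a}{R T}
\]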
Abstract:
Although individual techniques of supervised Machine Learning (ML), also known as classifiers or classification algorithms, provide solutions that are most of the time considered efficient, experimental results obtained with large pattern sets and/or with an expressive amount of irrelevant or incomplete data show a decrease in the accuracy of these techniques. In other words, such techniques cannot perform pattern recognition efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several types of ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS has as components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, in other words, a difference between the results obtained by each classifier that composes the system. It makes no sense to have MCSs whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is always a search to improve the results obtained by this type of system. Aiming at this improvement, at better consistency of the results and at larger diversity among the classifiers of an MCS, methodologies characterized by the use of weights, or confidence values, have recently been investigated. These weights can describe the importance that a certain classifier assigned when associating each pattern with a given class. The weights are also used, together with the outputs of the classifiers, during the recognition (use) phase of the MCS. There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by values that are not modified during the classification process, unlike the second category, in which the values are modified during the classification process. In this work, an analysis is made to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
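As a minimal illustration of the weighted combination idea (a sketch only, assuming scikit-learn-style base classifiers with a `predict` method; this is not the specific methodology evaluated in the dissertation), static weights can be derived from validation accuracy and used in a weighted vote:

```python
from collections import defaultdict

def static_weights(classifiers, X_val, y_val):
    """One static weight per base classifier: its accuracy on a validation set."""
    weights = []
    for clf in classifiers:
        hits = sum(1 for x, y in zip(X_val, y_val) if clf.predict([x])[0] == y)
        weights.append(hits / len(y_val))
    return weights

def weighted_vote(classifiers, weights, x):
    """Combine the base classifiers' outputs by summing the weight of each predicted class."""
    scores = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        scores[clf.predict([x])[0]] += w
    return max(scores, key=scores.get)
```

Dynamic weights differ only in that they would be recomputed for each pattern being classified, for example from each classifier's confidence in the neighborhood of that pattern.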
Abstract:
In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves. In other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, an investigation of the impact of using data distribution methods among the components of combination systems is performed. In this investigation, different methods of data distribution are used and the combination systems are analyzed under several different configurations. The aim of this analysis is to detect which combination systems are more suitable for the use of feature distribution among their components.
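For illustration only (a generic random-subspace-style feature distribution, not necessarily one of the data distribution methods investigated in this work), assigning different feature subsets to the base components could be sketched as:

```python
import random

def distribute_features(n_features, n_components, subset_size, seed=0):
    """Give each base component a random subset of feature indices,
    so that the components are trained on different views of the data."""
    rng = random.Random(seed)
    return [rng.sample(range(n_features), subset_size) for _ in range(n_components)]

def project(X, feature_idx):
    """Keep only the selected feature columns of each sample."""
    return [[row[i] for i in feature_idx] for row in X]

# Usage sketch: train each base classifier on its own projection of the data.
# subsets = distribute_features(n_features=30, n_components=5, subset_size=10)
# models = [clf.fit(project(X_train, s), y_train) for clf, s in zip(classifiers, subsets)]
```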
Abstract:
Pervasive applications use context provision middleware as an infrastructure to provide context information. Typically, those applications use publish/subscribe communication to eliminate the direct coupling between components and to allow selective information dissemination based on the interests of the communicating elements. Composite event mechanisms are used together with such middleware to aggregate individual low-level events, originating from heterogeneous sources, into high-level context information relevant to the application. CES (Composite Event System) is a composite event mechanism that works simultaneously, and in cooperation, with several context provision middleware systems. With that integration, applications use CES to subscribe to composite events; CES, in turn, subscribes to the primitive events in the appropriate underlying middleware and notifies the applications when the composite events occur. Furthermore, CES offers a language with a set of operators for the definition of composite events that also allows context information sharing.
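To give a flavor of the composition idea (a hypothetical sketch; the class, operator and event names below are illustrative and do not represent the actual CES language or API), a composite event defined as the conjunction of two primitive events within a time window could be detected roughly as follows:

```python
import time

class AndWithinWindow:
    """Hypothetical composite-event operator: invokes the callback when both
    primitive event types have been observed within `window` seconds of each other."""
    def __init__(self, type_a, type_b, window, callback):
        self.last = {type_a: None, type_b: None}  # last timestamp seen per primitive type
        self.window = window
        self.callback = callback

    def on_primitive(self, event_type):
        if event_type not in self.last:
            return
        self.last[event_type] = time.time()
        stamps = list(self.last.values())
        if None not in stamps and max(stamps) - min(stamps) <= self.window:
            self.callback(dict(self.last))

# Usage sketch: register on_primitive as the subscriber for the primitive events
# in the underlying context middleware (event names are illustrative).
# detector = AndWithinWindow("user_in_room", "projector_on", 10.0, print)
```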
Abstract:
The increase in transistor integration capacity has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as happens with buses, or can be an ad hoc solution, as with bus hierarchies. The ideal interconnection subsystem for SoCs is therefore the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other designs. However, NoCs can increase design complexity, chip area and dissipated power. Thus, it is necessary either to change the way they are used or to change the development paradigm. A NoC-based system is therefore proposed in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective is thus to study the feasibility of developing this system, named IPNoSys. In this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Through the simulation tool, several results were obtained to evaluate the system performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph, which then becomes one or more packets. This methodology was applied to three applications: a counter, a 2-D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform. Finally, the floating-point addition aimed to evaluate the efficiency of the software routine that executes an instruction not implemented in hardware. The simulation results confirm the feasibility of developing the IPNoSys system. They show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and that the execution time of IPNoSys is shorter than that of the STORM platform.
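As a toy illustration of the describe-by-packets methodology (a hypothetical sketch that does not reproduce the actual IPNoSys packet format or the spiral complement algorithm), a small expression can be turned into a data-flow graph and serialized into a packet of instructions in dependency order:

```python
# Toy data-flow graph for r = (a + b) * (c - d); each node is (operation, operand1, operand2).
graph = {
    "t1": ("ADD", "a", "b"),
    "t2": ("SUB", "c", "d"),
    "r":  ("MUL", "t1", "t2"),
}

def to_packet(graph, inputs):
    """Serialize the graph into an ordered instruction list (the 'packet'),
    emitting an instruction only after its operands are available."""
    ready, packet = set(inputs), []
    while len(packet) < len(graph):
        for dst, (op, x, y) in graph.items():
            if dst not in ready and x in ready and y in ready:
                packet.append((op, x, y, dst))
                ready.add(dst)
    return packet

print(to_packet(graph, {"a", "b", "c", "d"}))
# [('ADD', 'a', 'b', 't1'), ('SUB', 'c', 'd', 't2'), ('MUL', 't1', 't2', 'r')]
```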
Abstract:
It is increasingly common to use a single computing system through different devices - personal computers, cell phones and others - and software platforms - systems with graphical user interfaces, Web systems and others. Depending on the technologies involved, different software architectures may be employed. For example, Web systems use a client-server architecture, usually extended to three layers. Systems with graphical interfaces commonly use an architecture in the MVC style. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. Another aggravating factor is that the user interface often has a different structure, appearance and behavior on each device, which leads to low usability. Finally, building user interfaces specific to each device involved, with distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency among the user interfaces, and greater flexibility and scalability for the incorporation of new devices.
Abstract:
To manage the complexity associated with distributed multimedia systems, a solution must incorporate middleware concepts in order to hide specific hardware and operating system aspects. Applications in these systems can be implemented on different types of platforms, and the components of these systems must interact with each other. Because of the variability of the state of the execution platforms, a flexible approach should allow dynamic substitution of components in order to ensure the QoS level of the running application. In this context, this work presents an approach, at the middleware layer, for supporting dynamic substitution of components in the context of the Cosmos framework: it starts with the choice of the target component, proceeds to the decision of which of the candidate components will be chosen, and concludes with the process defined for the exchange. The approach was defined considering the Cosmos QoS model and how it deals with dynamic reconfiguration.
Abstract:
In this work, the technique of Differential Cryptanalysis, introduced in 1990 by Biham and Shamir, is applied to the Papílio cryptosystem, developed by Karla Ramos, to test it and, most importantly, to prove its relevance to this cipher, as already shown for other block ciphers such as DES, Blowfish and FEAL-N(X). This technique is based on the analysis of the differences between plaintexts and their respective ciphertexts, in search of patterns that assist in the discovery of the subkeys and, consequently, of the master key. These differences are obtained through XOR operations. Through this analysis, in addition to obtaining the patterns of Papílio, the main characteristics and the behavior of Papílio throughout its 16 rounds are also studied, identifying and replacing, when necessary, factors that can be improved in accordance with its pre-established definitions, thus providing greater security in the use of the algorithm.
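As a minimal, generic illustration of the XOR-difference analysis mentioned above (a toy example over a 4-bit S-box; the S-box values are illustrative and are not taken from Papílio, DES, Blowfish or FEAL), the difference distribution that such an attack exploits can be tabulated as follows:

```python
from collections import Counter

# Illustrative 4-bit S-box (an arbitrary permutation of 0..15, not a real cipher's S-box).
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def difference_distribution(sbox):
    """Count, for every input XOR difference dx, how often each output
    XOR difference dy occurs over all input pairs (x, x ^ dx)."""
    table = Counter()
    for x in range(16):
        for dx in range(16):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[(dx, dy)] += 1
    return table

ddt = difference_distribution(SBOX)
# A high count for some (dx, dy) with dx != 0 indicates a differential pattern
# that helps recover subkey bits and, eventually, the master key.
best = max(((k, n) for k, n in ddt.items() if k[0] != 0), key=lambda t: t[1])
print(best)
```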
Abstract:
Distributed multimedia systems have highly variable characteristics, resulting in new requirements as new technologies become available or in the need to adapt according to the amount of available resources. Thus, these systems should provide support for dynamic adaptation in order to adjust their structure and behavior at runtime. This work presents a model-based approach to adaptation and proposes a reflective, component-based framework for the construction and support of self-adaptive distributed multimedia systems, providing many facilities for the development and evolution of such systems, such as dynamic adaptation. The proposal is to keep one or more models that represent the system at runtime, so that some external entity can analyze these models, identifying problems and trying to solve them. These models integrate the reflective meta-level, acting as a self-representation of the system. The framework defines a meta-model for the description of self-adaptive distributed multimedia applications, which can represent components and their relationships, policies for QoS specification, and adaptation actions. Additionally, this work proposes an ADL and an architecture for model-based adaptation. As a case study, some scenarios are presented to demonstrate the application of the framework in practice, with and without the use of the ADL, as well as to verify some characteristics related to dynamic adaptation.
Abstract:
This work proposes a model-based approach for pointcut management in the presence of evolution in aspect-oriented systems. The proposed approach, called conceptual-views-based pointcuts, is motivated by the observation of the shortcomings of traditional pointcut definition approaches, which generally refer directly to the software structure and/or behavior, thereby creating a strong coupling between the pointcut definitions and the base code. This coupling causes the problem known as pointcut fragility and hinders the evolution of aspect-oriented systems: whenever the software changes or evolves, all the pointcuts of each aspect must be reviewed to ensure that they remain valid after the changes made to the software. Our approach focuses on defining pointcuts based on a conceptual model, which describes the system's structure at a more abstract level. The conceptual model consists of classifications (called conceptual views) over the entities of the business model, based on common characteristics, and of relationships between these views. Pointcut definitions are thus created based on the conceptual model rather than referencing the base model directly. Moreover, the conceptual model contains a set of relationships that allows automatically verifying whether the classifications in the conceptual model remain valid after a software change. To this end, all development using the conceptual-views-based pointcut approach is supported by a conceptual framework called CrossMDA2 and by an MDA-based development process, both also proposed in this work. As proof of concept, two versions of a case study are presented, setting up an evolution scenario that shows how the use of conceptual-views-based pointcuts helps to detect and minimize pointcut fragility. For the evaluation of the proposal, the Goal/Question/Metric (GQM) technique is used, together with metrics for analyzing the efficiency of pointcut definition.
Abstract:
The World Wide Web has been consolidated over the last years as a standard platform for providing software systems on the Internet. Nowadays, a great variety of user applications are available on the Web, ranging from corporate applications to the banking domain, and from electronic commerce to the governmental domain. Given the quantity of information available and the number of users dealing with these services, many Web systems have sought to present usage recommendations as part of their functionality, in order to let users make better use of the available services based on their profile, navigation history and system usage. In this context, this dissertation proposes the development of an agent-based framework that offers recommendations to users of Web systems. It involves the conception, design and implementation of an object-oriented framework. The framework agents can be plugged into or unplugged from existing Web applications in a non-invasive way, using aspect-oriented techniques. The framework is evaluated through its instantiation in three different Web systems.