997 results for Expert systems (Computer science)
Abstract:
Among the new drugs launched on the market since 1980, up to 30% belong to the class of natural products or have a semisynthetic origin. Between 40-70% of new chemical entities (or lead compounds) possess poor water solubility, which may impair their commercial use. An alternative for the administration of poorly water-soluble drugs is their vehiculation in drug delivery systems such as micelles, microemulsions, nanoparticles, liposomes, and cyclodextrin systems. In this work, microemulsion-based drug delivery systems were obtained using pharmaceutically acceptable components: a 3:1 mixture of Tween 80 and Span 20 as surfactant, isopropyl myristate or oleic acid as oil, bidistilled water and, in some formulations, ethanol as cosurfactant. Self-Microemulsifying Drug Delivery Systems (SMEDDS) were also obtained using propylene glycol or sorbitol as cosurfactant. All formulations were characterized for rheological behavior, droplet size and electrical conductivity. The bioactive natural product trans-dehydrocrotonin, as well as some extracts and fractions from Croton cajucara Benth (Euphorbiaceae), Anacardium occidentale L. (Anacardiaceae) and Phyllanthus amarus Schum. & Thonn. (Euphorbiaceae) specimens, were satisfactorily solubilized in the microemulsion formulations. Meanwhile, two other natural products from Croton cajucara, trans-crotonin and acetyl aleuritolic acid, showed poor solubility in these formulations. The evaluation of the antioxidant capacity of the plant extracts loaded into microemulsions, by the DPPH method, evidenced the antioxidant activity of the Phyllanthus amarus and Anacardium occidentale extracts. For the Phyllanthus amarus extract, the use of microemulsions doubled its antioxidant efficiency. A hydroalcoholic extract from Croton cajucara incorporated into a SMEDDS formulation showed bacteriostatic activity against colonies of Bacillus cereus and Escherichia coli bacteria.
Additionally, Molecular Dynamics simulations were performed on micellar drug delivery systems containing the sugar-based surfactants N-dodecylamino-1-deoxylactitol and N-dodecyl-D-lactosylamine. The computational simulations indicated that the micellization process is more favorable for N-dodecylamino-1-deoxylactitol than for the N-dodecyl-D-lactosylamine system.
Abstract:
In this research, the removal of light and heavy oil from disintegrated limestone was investigated using microemulsions. These chemical systems were composed of surfactant, cosurfactant, oil phase and aqueous phase. In the studied systems, three points in the water-rich microemulsion region of the phase diagrams were used in the oil removal experiments. The microemulsion systems were characterized to evaluate the influence of particle size, surface tension, density and viscosity on micellar stability and to understand how these physical properties can influence the oil recovery process. The limestone rock sample was characterized by thermogravimetry, BET area, scanning electron microscopy and X-ray fluorescence. After preparation, the rock was placed in contact with light and heavy oil solutions to allow oil adsorption. The removal tests evaluated the influence of contact time (1, 30, 60 and 120 minutes), active matter concentration (20, 30 and 40 %), different cosurfactants and different oil phases. For the heavy oil, the best result was obtained with SME 1, at 20 % active matter and 1 minute of contact time, with an efficiency of 93.33 %. For the light oil, the best result was also obtained with SME 1, at 20 % active matter and 120 minutes of contact time, with 62.38 % efficiency. From the obtained results, it was possible to conclude that microemulsions can be considered efficient chemical systems for oil removal from limestone formations.
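The efficiencies quoted above follow from comparing the oil mass adsorbed on the rock with the mass recovered by the microemulsion wash; a minimal sketch of that bookkeeping (function name and masses are illustrative, not from the original work):

```python
def removal_efficiency(mass_oil_adsorbed, mass_oil_removed):
    """Percentage of adsorbed oil recovered by the microemulsion wash."""
    if mass_oil_adsorbed <= 0:
        raise ValueError("adsorbed mass must be positive")
    return 100.0 * mass_oil_removed / mass_oil_adsorbed

# Illustrative masses in grams (not measured values from the study):
print(round(removal_efficiency(1.5, 1.4), 2))  # -> 93.33
```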
Abstract:
During the storage of oil, sludge forms at the bottom of tanks due to decantation; this sludge is composed of a large quantity of oil (heavy petroleum fractions), water and solids. Oil sludge is a complex viscous mixture that is considered a hazardous waste. It is therefore necessary to develop methods and technologies that optimize the cleaning process, oil extraction and industrial applications. This study aimed to determine the composition of the oil sludge, to obtain and characterize microemulsion systems (MES), and to study their application in the treatment of the sludge. In this context, Soxhlet extraction of crude oil sludge and aged sludge was carried out, allowing quantification of the oil (43.9 % and 84.7 % - 13 ºAPI), water (38.7 % and 9.15 %) and solids (17.3 % and 6.15 %) contents, respectively. The residues were characterized by X-ray fluorescence (XRF), X-ray diffraction (XRD) and transmission infrared spectroscopy (FT-IR). XRF determined the presence of iron and sulfur in higher proportions, and XRD confirmed the presence of the following minerals: pyrite (FeS2), pyrrhotite (FeS) and magnetite (Fe3O4). FT-IR showed the presence of heavy oil fractions. In parallel, twelve MES were prepared by combining the following constituents: two nonionic surfactants (Unitol L90 and Renex 110 - S), three cosurfactants (butanol, sec-butanol and isoamyl alcohol - C), three aqueous phases (tap water - ADT, 6 % HCl acidic solution, and 3.5 % NaCl saline solution - AP) and one oil phase (kerosene - OP). From the obtained systems, a common point belonging to the microemulsion region was chosen (25 % [C+S], 5 % OP and 70 % AP), which was characterized at room temperature (25 °C) by viscosity (Haake Mars rheometer), particle diameter (Zeta Plus) and thermal stability. Mixtures with this composition were applied to oil sludge solubilization under agitation at a ratio of 1:4, varying time and temperature.
The solubilization efficiencies, obtained excluding the solids, ranged between 73.5 % and 95 %. Two particular systems were selected for use in storage tanks, with oil sludge solubilization efficiencies over 90 %, which proved the effectiveness of the MES. A factorial design delimited within the domain showed, through predictive models, how the MES constituents affect the solubilization of aged oil sludge. MES A was chosen as the best system, solubilizing a high amount of aged crude oil sludge (~ 151.7 g/L per MES).
Abstract:
Knowledge of the rheological behavior of microemulsified systems (SME) is of fundamental importance due to the diversity of industrial applications of these systems. This dissertation presents the rheological behavior of the microemulsified system formed by RNX 95/isopropyl alcohol/sodium p-toluenesulfonate/kerosene/distilled water with the addition of polyacrylamide polymer. Three polyacrylamide-type polymers were chosen, differing in molar weight and charge density. The addition of these polymers was studied at a relatively small concentration of 0.1 % by mass and at a maximum concentration of 2.0 %. Flow analyses were performed to determine the apparent viscosities of the SME and the rheological parameters, applying the Bingham, Ostwald-de Waele and Herschel-Bulkley models. The behavior of this system in a saline environment was studied using a 2.0 % KCl solution in place of the distilled water. The behavior of the microemulsions with respect to temperature was determined through flow curves at temperatures from 25 to 60 °C in steps of 5 °C. The analysis of the results showed that the microemulsion without added polymer presented a slight increase in viscosity, which does not mischaracterize it as a Newtonian fluid. The systems with low polymer concentration fitted the applied models well, with behavior very close to that of the microemulsion. The higher polymer concentration gave the systems the behavior of a plastic fluid. The results of the temperature variation point to an increase of viscosity in the systems, which can be related to structural changes in the micelles formed in the microemulsion itself, without polymer addition.
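The three constitutive models named above relate shear stress to shear rate; a minimal sketch of their functional forms (parameter values below are illustrative, not fitted values from the dissertation):

```python
# Shear-stress models used to fit flow curves of the SME.
# tau0 = yield stress, mu_p = plastic viscosity, k = consistency index,
# n = flow behavior index; gamma_dot = shear rate.

def bingham(gamma_dot, tau0, mu_p):
    """Bingham plastic: tau = tau0 + mu_p * gamma_dot."""
    return tau0 + mu_p * gamma_dot

def ostwald_de_waele(gamma_dot, k, n):
    """Power law (Ostwald-de Waele): tau = k * gamma_dot**n."""
    return k * gamma_dot ** n

def herschel_bulkley(gamma_dot, tau0, k, n):
    """Herschel-Bulkley: tau = tau0 + k * gamma_dot**n."""
    return tau0 + k * gamma_dot ** n

# A Newtonian fluid is the common special case tau0 = 0, n = 1:
assert herschel_bulkley(10.0, 0.0, 0.05, 1.0) == ostwald_de_waele(10.0, 0.05, 1.0)
```

Fitting measured (gamma_dot, tau) pairs to these forms yields the rheological parameters; a plastic-fluid response shows up as a nonzero fitted tau0.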
Abstract:
Layered double hydroxides have become extremely promising materials due to their range of applications, ease of laboratory synthesis and reusability after calcination, so knowledge of their properties is of utmost importance. In this study, layered double hydroxides of two systems, Mg-Al and Zn-Al, were synthesized and analyzed by X-ray diffraction; from these data, the volume density, planar atomic density, crystallite size, lattice parameters, interplanar spacing and available interlayer space were determined. The materials were also subjected to thermogravimetric analysis at heating rates of 5, 10, 20 and 25 °C/min to determine kinetic parameters for the formation of the HTD and HTB metaphases, based on the Ozawa, Flynn-Wall, Starink and Model Free Kinetics theoretical models. In addition, the layered double hydroxides synthesized in this work were calcined at heating rates of 2.5 °C/min and 20 °C/min and tested for adsorption of the nitrate anion in aqueous solution, in a batch system, at time intervals of 5 min, 15 min, 30 min, 1 h, 2 h and 4 h. The calcined materials were also exposed to the atmosphere and, at intervals of 1 week, 2 weeks and 1 month, analyzed by infrared spectroscopy to study the kinetics of the structural regeneration known as the "memory effect".
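The isoconversional methods named above share one computation: at a fixed conversion, the logarithm of the heating rate is regressed against the reciprocal peak temperature. A hedged sketch of the Ozawa-Flynn-Wall variant, where the slope equals approximately -1.052 Ea/R (the heating rates match the abstract; the temperatures are illustrative placeholders, not measured data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ofw_activation_energy(betas, temps_K):
    """Ozawa-Flynn-Wall estimate of Ea from a least-squares fit of
    ln(beta) versus 1/T at a fixed conversion: slope = -1.052 * Ea / R."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(b) for b in betas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R / 1.052  # Ea in J/mol

# Heating rates from the abstract (K/min); temperatures are illustrative.
betas = [5, 10, 20, 25]
temps = [623.0, 633.0, 644.0, 647.0]
print(ofw_activation_energy(betas, temps) / 1000, "kJ/mol")
```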
Abstract:
The environmental impact of the improper disposal of metal-bearing industrial effluents imposes the need for wastewater treatment, since heavy metals are nonbiodegradable and hazardous substances that may cause undesirable effects to humans and the environment. The use of microemulsion systems for the extraction of metal ions from wastewaters is effective when it occurs in a Winsor II (WII) domain, where a microemulsion phase is in equilibrium with an aqueous phase in excess. However, the microemulsion phase formed in this system has a larger amount of active matter than a WIII system (microemulsion in equilibrium with aqueous and oil phases, both in excess). This motivated a comparative study to evaluate the efficiency of two-phase and three-phase microemulsion systems (WII and WIII) in the extraction of Cu2+ and Ni2+ from aqueous solutions. The systems were composed of: saponified coconut oil (SCO) as surfactant, n-butanol as cosurfactant, kerosene as oil phase, and synthetic solutions of CuSO4·5H2O and NiSO4·6H2O, with 2 wt.% NaCl, as aqueous phase. Pseudoternary phase diagrams were obtained, and the systems were characterized by surface tension measurements, particle size determination and scanning electron microscopy (SEM). The concentrations of metal ions before and after extraction were determined by atomic absorption spectrometry. The extraction study of Cu2+ and Ni2+ in the WIII domain contributed to a better understanding of microemulsion extraction, elucidating the various behaviors reported in the literature for these systems. Furthermore, since WIII systems presented high extraction efficiencies, similar to those of Winsor II systems, they represent an economic and technological advantage in heavy metal extraction, due to the small amount of surfactant and cosurfactant used in the process and the formation of a reduced volume of aqueous phase with a high concentration of metal.
Considering the re-extraction process, the WIII system proved more effective because re-extraction is performed in the oil phase, unlike in WII, where it is performed in the aqueous phase. The presence of the metal-surfactant complex in the oil phase makes it possible to regenerate only the surfactant present in the organic phase, rather than all the surfactant in the process, as in the WII system. This allows the reuse of the microemulsion phase in a new extraction process, reducing the costs of surfactant regeneration.
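The extraction efficiency compared between WII and WIII follows directly from the metal-ion concentrations measured before and after extraction (e.g. by atomic absorption spectrometry); a minimal sketch, with illustrative concentrations rather than values from this study:

```python
def extraction_efficiency(c_initial, c_final):
    """Percent of metal ion transferred out of the aqueous phase,
    from concentrations measured before and after extraction."""
    if c_initial <= 0:
        raise ValueError("initial concentration must be positive")
    return 100.0 * (c_initial - c_final) / c_initial

# Illustrative Cu2+ concentrations in mg/L (not measured data):
print(round(extraction_efficiency(100.0, 4.0), 1))  # -> 96.0
```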
Abstract:
Sustainable development is a major challenge in the oil industry and has aroused growing interest in research on materials from renewable sources. Carboxymethylcellulose (CMC) is a polysaccharide derived from cellulose and is attractive because it is water-soluble, renewable, biodegradable and inexpensive, and may be chemically modified to gain new properties. Among the derivatives of carboxymethylcellulose, systems have been developed to induce stimuli-responsive properties and extend the applicability of multiple-responsive materials. Although these new materials have been the subject of study, understanding of their physicochemical properties, such as viscosity, solubility and particle size as a function of pH and temperature, is still very limited. This study describes systems of physical blends and copolymers based on carboxymethylcellulose and poly(N-isopropylacrylamide) (PNIPAM), with different feed percentage compositions of the reaction (25CMC, 50CMC and 75CMC), in aqueous solution. The chemical structure of the polymers was investigated by infrared and CHN elemental analysis. The physical blends were analyzed by rheology, and the copolymers by UV-visible spectroscopy, small-angle X-ray scattering (SAXS), dynamic light scattering (DLS) and zeta potential. CMC and the copolymers were assessed as scale inhibitors of calcium carbonate (CaCO3) using dynamic tube blocking tests and chemical compatibility tests, as well as scanning electron microscopy (SEM). Thermothickening behavior was observed for the 50 % CMC_50 % PNIPAM and 25 % CMC_75 % PNIPAM physical blends in aqueous solution at concentrations of 6 and 2 g/L, respectively, depending on polymer concentration and composition. For the copolymers, the increase in temperature and amount of PNIPAM favored polymer-polymer interactions through hydrophobic groups, resulting in increased turbidity of the polymer solutions.
Particle size decreased with the rise in copolymer PNIPAM content as a function of pH (3-12), at 25 °C. Larger amounts of CMC resulted in a stronger effect of pH on particle size, indicating pH-responsive behavior; accordingly, 25CMC was not affected by the change in pH, exhibiting behavior similar to PNIPAM. In addition, the presence of acidic or basic additives influenced particle size, which was smaller in the presence of the additives than in distilled water. The zeta potential results also showed greater variation for polymers in distilled water than in the presence of acids and bases. The lower critical solution temperature (LCST) of PNIPAM determined by DLS corroborated the value obtained by UV-visible spectroscopy. SAXS data for PNIPAM and 50CMC indicated a phase transition when the temperature increased from 32 to 34 °C. A reduction in, or absence of, electrostatic properties was observed as a function of increased PNIPAM in the copolymer composition. Assessment of the samples as scale inhibitors showed that CMC performed better than the copolymers, which was attributed to the higher charge density of CMC. The SEM micrographs confirmed morphological changes in the CaCO3 crystals, demonstrating the scale-inhibiting potential of these polymers.
Abstract:
The next generation of computers is expected to be built on architectures with multiple processors and/or multicore processors. This raises challenges related to interconnection resources, operating frequency, on-chip area, power dissipation, performance and programmability. The interconnection and communication mechanism considered ideal for this type of architecture is the network-on-chip, due to its scalability, reusability and intrinsic parallelism. Communication in networks-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packet transmission proceeds as in a pipeline between the routers of the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. Building on this fact, this work proposes to transform the entire communication infrastructure of the network-on-chip, using its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, the packets are formed by the instructions and data that represent the applications, and these are executed in the routers as the packets are transmitted, exploiting the pipeline and the parallel communication channels. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation. The architecture provides mechanisms for input and output, interrupts and operating system support.
As proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing the configuration of various parameters and the collection of several results to evaluate it.
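The execution model described above (packets computed as they travel) can be illustrated with a toy sketch: a packet carries instructions and operands, and each router on the source-to-destination path executes the next pending instruction before forwarding the packet. The mini instruction set and router names are hypothetical, not the actual IPNoSys ISA:

```python
def router_step(packet):
    """Execute the packet's first pending instruction at the current router."""
    op, a, b = packet["instructions"].pop(0)
    if op == "ADD":
        packet["results"].append(a + b)
    elif op == "MUL":
        packet["results"].append(a * b)
    return packet

def run_on_path(packet, path):
    """Forward the packet through the routers, computing as it travels."""
    for router in path:
        if packet["instructions"]:
            packet = router_step(packet)
    return packet

packet = {"instructions": [("ADD", 2, 3), ("MUL", 4, 5)], "results": []}
done = run_on_path(packet, path=["R00", "R01", "R02"])
print(done["results"])  # -> [5, 20]
```

In the real architecture the path is chosen by the routing algorithm, and a packet whose instructions outnumber the routers on its path is re-routed so execution can continue.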
Abstract:
Although some individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, provide solutions that are usually considered efficient, experimental results obtained with large pattern sets and/or with an expressive amount of irrelevant or incomplete data show a decrease in the precision of these techniques. In other words, such techniques cannot perform pattern recognition efficiently in complex problems. Aiming at better performance and efficiency for these ML techniques, the idea arose of making several types of ML algorithm work jointly, giving rise to the term Multi-Classifier System (MCS). An MCS has different ML algorithms as components, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results obtained by the classifiers composing the system. There is no point in an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a constant search to improve the results obtained by this type of system. Aiming at this improvement, at better consistency of results, and at greater diversity among the classifiers of an MCS, methodologies characterized by the use of weights, or confidence values, have recently been investigated. These weights can describe the importance of a given classifier's association of each pattern with a given class. They are also used, in combination with the classifiers' outputs, during the recognition (use) phase of the MCS.
There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process, unlike the second category, whose values are modified during classification. In this work, an analysis is performed to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is a relation between the use of weights in MCSs and different levels of diversity.
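The static/dynamic distinction can be sketched with a simple weighted vote over the base classifiers' class posteriors. The weighting schemes below are illustrative (e.g. validation accuracy for static weights, per-pattern confidence for dynamic weights), not the specific methods evaluated in this work:

```python
def weighted_vote(posteriors, weights):
    """Combine per-classifier class posteriors with one weight each,
    returning the index of the winning class."""
    n_classes = len(posteriors[0])
    scores = [0.0] * n_classes
    for p, w in zip(posteriors, weights):
        for c in range(n_classes):
            scores[c] += w * p[c]
    return scores.index(max(scores))

# Three base classifiers, two classes; outputs for one test pattern.
posteriors = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]

# Static: fixed before classification (here, the first classifier is
# assumed far more accurate on validation data).
static_weights = [0.95, 0.1, 0.1]
# Dynamic: recomputed per pattern (here, each classifier's confidence).
dynamic_weights = [max(p) for p in posteriors]

print(weighted_vote(posteriors, static_weights))   # -> 0
print(weighted_vote(posteriors, dynamic_weights))  # -> 1
```

The same pattern can thus receive different labels under the two categories, which is exactly why their effect on MCS performance is compared.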
Abstract:
In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves. In other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, an investigation of the impact of using data distribution methods among the components of combination systems is performed. In this investigation, different methods of data distribution are used, and the combination systems are analyzed under several different configurations. As a result of this analysis, the aim is to detect which combination systems are more suitable for feature distribution among their components.
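One common data-distribution method of this kind assigns each base component its own random subset of the features (the random-subspace idea); a minimal sketch, with illustrative sizes and seed (the abstract does not name the specific methods used):

```python
import random

def distribute_features(n_features, n_components, subset_size, seed=0):
    """Give each component a random feature subset to train on, so that
    components see different views of the data and become diverse."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_components)]

subsets = distribute_features(n_features=10, n_components=3, subset_size=4)
for i, s in enumerate(subsets):
    print(f"component {i} trains on features {s}")
```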
Abstract:
Pervasive applications use context provision middleware as infrastructure to supply context information. Typically, these applications use publish/subscribe communication to eliminate direct coupling between components and to allow selective information dissemination based on the interests of the communicating elements. Composite event mechanisms are used together with such middlewares to aggregate individual low-level events, originating from heterogeneous sources, into high-level context information relevant to the application. CES (Composite Event System) is a composite event mechanism that works simultaneously, in cooperation, with several context provision middlewares. With this integration, applications use CES to subscribe to composite events; CES, in turn, subscribes to the primitive events in the appropriate underlying middlewares and notifies the applications when the composite events happen. Furthermore, CES offers a language with a set of operators for the definition of composite events, which also allows context information sharing.
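The aggregation step can be sketched as a subscription that fires only when all of its primitive events have been observed (a conjunction); the class, event names and payloads below are illustrative, not the actual CES operator language:

```python
class CompositeSubscription:
    """Fires the callback once every required primitive event
    has been seen (an AND over primitive events)."""
    def __init__(self, required, callback):
        self.required = set(required)
        self.seen = {}
        self.callback = callback

    def notify(self, event_type, payload):
        """Called by the underlying middlewares on each primitive event."""
        if event_type in self.required:
            self.seen[event_type] = payload
            if set(self.seen) == self.required:
                self.callback(dict(self.seen))  # deliver the composite event
                self.seen.clear()

fired = []
sub = CompositeSubscription({"gps.indoors", "wifi.office"}, fired.append)

# Primitive events arriving from different context middlewares:
sub.notify("gps.indoors", {"building": "A"})
sub.notify("wifi.office", {"ssid": "lab"})
print(len(fired))  # the composite "user is in the office" event fired once
```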
Abstract:
The increase in transistor integration capacity has made it possible to develop complete systems, with several components, on a single chip, called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or be an ad hoc solution, as with bus hierarchies. The ideal interconnection subsystem for SoCs is thus the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can increase design complexity, on-chip area and dissipated power. It is therefore necessary either to modify the way they are used or to change the development paradigm. Thus, a NoC-based system is proposed in which the applications are described as packages and executed in each router between source and destination, without traditional processors. To execute applications independently of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective, therefore, is to study the viability of developing this system, named the IPNoSys system. In this study, a cycle-accurate tool was developed in SystemC to simulate the system executing applications, which were implemented in a package description language also developed for this study. Through the simulation tool, several results were obtained to evaluate the system's performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph, which becomes one or more packages. This methodology was used in three applications: a counter, a 2D DCT and a floating-point addition. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison with the STORM platform.
Finally, the floating-point addition aimed to evaluate the efficiency of a software routine that performs an unimplemented hardware instruction. The simulation results confirm the viability of developing the IPNoSys system. They show that it is possible to execute applications described in packages, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is better than that of the STORM platform.
Abstract:
It is increasingly common to use a single computer system across different devices (personal computers, cellular telephones and others) and software platforms (graphical user interface systems, Web systems and others). Depending on the technologies involved, different software architectures may be employed. For example, Web systems use a client-server architecture, usually extended to three layers. In systems with graphical interfaces, an architecture in the MVC style is common. The use of architectures with different styles hinders the interoperability of systems across multiple platforms. Another aggravating factor is that the user interface on each device often has a different structure, appearance and behavior, which leads to low usability. Finally, building user interfaces specific to each of the devices involved, with distinct features and technologies, is work that must be done individually and does not scale. This study sought to address some of these problems by presenting a platform-independent reference architecture that allows the user interface to be built from an abstract specification described in a user interface specification language, MML. This solution is designed to offer greater interoperability between different platforms, greater consistency between the user interfaces, and greater flexibility and scalability for the incorporation of new devices.
Abstract:
To manage the complexity associated with the management of distributed multimedia systems, a solution must incorporate middleware concepts in order to hide specific hardware and operating system aspects. Applications in these systems can be deployed on different types of platforms, and the components of these systems must interact with each other. Because of the variability of the state of the execution platforms, a flexible approach should allow dynamic substitution of components in order to ensure the QoS level of the running application. In this context, this work presents an approach, at the middleware layer, for supporting dynamic substitution of components in the context of the Cosmos framework: starting with the choice of the target component, then deciding which among the candidate components will be chosen, and concluding with the process defined for the exchange. The approach was defined considering the Cosmos QoS model and how it deals with dynamic reconfiguration.
Abstract:
In this work, the technique of Differential Cryptanalysis, introduced in 1990 by Biham and Shamir, is applied to the Papílio cryptosystem, developed by Karla Ramos, to test it and, most importantly, to prove its relevance, as has been done for other block ciphers such as DES, Blowfish and FEAL-N(X). This technique is based on the analysis of the differences between plaintexts and their respective ciphertexts, in search of patterns that assist in the discovery of the subkeys and, consequently, of the master key. These differences are obtained through XOR operations. Through this analysis, in addition to obtaining the patterns of Papílio, its main characteristics and behavior throughout its 16 rounds are also studied, identifying and replacing, when necessary, factors that can be improved in accordance with pre-established definitions, thus providing greater security in the use of the algorithm.
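The basic object of the technique can be sketched in a few lines: a fixed XOR difference between two plaintexts is traced to the XOR difference of the outputs of a round. The 4-bit S-box below is a common textbook example, not Papílio's actual round function:

```python
# Toy S-box (4-bit) for illustrating differential cryptanalysis.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def round_fn(x, key):
    """One toy round: key mixing by XOR, then S-box substitution."""
    return SBOX[(x ^ key) & 0xF]

def output_difference(p1, p2, key):
    """Output XOR difference for a plaintext pair under one round."""
    return round_fn(p1, key) ^ round_fn(p2, key)

# Key mixing by XOR preserves the input difference: (p1^k)^(p2^k) = p1^p2,
# so the attacker controls the difference entering the S-box.
p1, p2 = 0x3, 0x6                     # input difference 0x5
diffs = {output_difference(p1, p2, k) for k in range(16)}
# The reachable output differences are non-uniform; the attacker exploits
# the most probable ones to recover subkey bits round by round.
print(sorted(diffs))
```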