816 results for Learning in multi-agent systems
Abstract:
An aqueous extract of Rhizophora mangle L. bark is used as a raw material in pottery making in the State of Espirito Santo, Brazil. This extract contains large quantities of tannins, compounds possessing antioxidant properties. The antioxidant activity of tannins, part of a plant chemical defense mechanism that stabilizes free radicals, has motivated studies of their antimutagenicity. The present work aimed to evaluate the possible antimutagenic activity of an R. mangle aqueous extract, using the Allium cepa test-system and the cytokinesis-block micronucleus (MN) assay in Chinese hamster ovary cells (CHO-K1). The Allium cepa test-system indicated antimutagenic activity against the damage induced by the mutagenic agent methyl methanesulfonate. A reduction in both MN cell frequency and chromosome breaks occurred in both the pre- and post-treatment protocols. The MN testing of CHO-K1 cells revealed antimutagenic activity of the R. mangle extract against methyl methanesulfonate and doxorubicin in pre-, simultaneous- and post-treatment protocols. These results suggest the presence of phytoconstituents in the extract with desmutagenic and bio-antimutagenic activities. Since R. mangle presents an elevated tannin content, these compounds are very likely the antimutagenic agents themselves.
Abstract:
An automatic procedure with a high current-density anodic electrodissolution unit (HDAE) is proposed for the determination of aluminium, copper and zinc in non-ferrous alloys by flame atomic absorption spectrometry, based on direct solid analysis. It consists of solenoid valve-based commutation in a flow-injection system for on-line sample electrodissolution and calibration with one multi-element standard, an electrolytic cell equipped with two electrodes (a silver needle acts as cathode, and the sample as anode), and an intelligent unit. The latter is assembled in a PC-compatible microcomputer for instrument control and for data acquisition and processing. General management of the process is achieved with software written in Pascal. Electrolyte compositions, flow rates, commutation times, applied current and electrolysis time were investigated. A 0.5 mol l(-1) HNO3 solution was selected as electrolyte and 300 A cm(-2) as the continuous current pulse. The performance of the proposed system was evaluated by analysing aluminium in Al-alloy samples, and copper/zinc in brass and bronze samples, respectively. The system handles about 50 samples per hour. Results are precise (R.S.D. < 2%) and in agreement with those obtained by ICP-AES and spectrophotometry at the 95% confidence level.
Abstract:
A new strategy is proposed for minimizing Cu2+ and Pb2+ interferences in the spectrophotometric determination of Cd2+ by the malachite green (MG)-iodide reaction, using electrolytic deposition of the interfering species and solid-phase extraction of Cd2+ in a flow system. The electrolytic cell comprises two coiled Pt electrodes concentrically assembled. When the sample solution is electrolyzed in a mixed solution containing 5% (v/v) HNO3, 0.1% (v/v) H2SO4 and 0.5 M NaCl, Cu2+ is deposited as Cu on the cathode and Pb2+ is deposited as PbO2 on the anode, while Cd2+ is kept in solution. After electrolysis, the remaining solution passes through a minicolumn packed with AG1-X8 resin (chloride form), in which Cd2+ is extracted as CdCl4(2-). Electrolyte compositions, flow rates, timing, applied current and electrolysis time were investigated. With a 60 s electrolysis time and 0.25 A applied current, Pb2+ and Cu2+ levels up to 50 and 250 mg l-1, respectively, can be tolerated without interference. For a 90 s resin loading time, a linear relationship between absorbance and analyte concentration in the 5.00-50.0 μg Cd l-1 range (r2 = 0.9996) is obtained. A throughput of 20 samples per hour is achieved, corresponding to about 0.7 mg MG, 500 mg KI and 5 ml of sample consumed per determination. The detection limit is 0.23 μg Cd l-1. The accuracy was checked by determining cadmium in standard reference materials, vegetables and tap water. Results were in agreement with the certified values of the standard reference materials and with those obtained by graphite furnace atomic absorption spectrometry at the 95% confidence level. The R.S.D. for plant digests and water containing 13.0 μg Cd l-1 was 3.85% (n = 12). Recoveries of analyte spikes added to the water and vegetable samples ranged from 94 to 104%. (C) 2000 Elsevier Science B.V.
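The calibration above (absorbance versus concentration over 5.00-50.0 μg Cd l-1, r2 = 0.9996) is an ordinary least-squares fit. The sketch below shows such a fit with hypothetical absorbance readings; the paper's raw calibration data are not reproduced here:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical calibration standards (μg Cd l-1) and absorbances:
conc = [5.0, 10.0, 20.0, 30.0, 40.0, 50.0]
absorb = [0.021, 0.043, 0.085, 0.128, 0.170, 0.212]
a, b, r2 = linear_fit(conc, absorb)
```

An unknown sample's concentration is then read back from its absorbance as `(A - a) / b`.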
Abstract:
This study introduces a multi-agent architecture designed to automate the processes of data integration and intelligent data analysis. Unlike other approaches, the architecture was designed using Tropos, an agent-based methodology. Based on the proposed architecture, we describe a Web-based application in which the agents are responsible for analysing petroleum well drilling data to identify possible abnormalities. The intelligent data analysis method used was a neural network.
Abstract:
This paper presents a mixed-integer linear programming model to solve the problem of allocating voltage regulators and fixed or switched capacitors (VRCs) in radial distribution systems. The use of a mixed-integer linear model guarantees convergence to optimality using existing optimization software. In the proposed model, the steady-state operation of the radial distribution system is modeled through linear expressions. The results for one test system and one real distribution system are presented to show the accuracy and efficiency of the proposed solution technique. A heuristic for obtaining the Pareto front of the multiobjective VRC allocation problem is also presented. © 2012 Elsevier Ltd. All rights reserved.
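At the core of any heuristic for building a Pareto front is a non-dominated filter over candidate solutions. The sketch below applies one to hypothetical (investment cost, energy losses) pairs; the actual VRC objectives and heuristic are not specified in the abstract:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (cost, losses) pairs,
    with both objectives minimized."""
    def dominates(a, b):
        # a dominates b: no worse in both objectives, strictly better in one.
        return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]

# Hypothetical candidate allocations: (investment cost, energy losses)
candidates = [(100, 8), (120, 5), (110, 6), (130, 9), (140, 4)]
front = pareto_front(candidates)  # (130, 9) is dominated by (100, 8)
```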
Abstract:
In this study, a novel approach for the optimal location and contract pricing of distributed generation (DG) is presented. The approach is designed for a market environment in which the distribution company (DisCo) can buy energy either from the wholesale energy market or from the DG units within its network. The location and contract pricing of DG are determined by the interaction between the DisCo and the owner of the distributed generators. The DisCo intends to minimise the payments incurred in meeting the expected demand, whereas the owner of the DG intends to maximise the profit obtained from the energy sold to the DisCo. This two-agent relationship is modelled in a bilevel scheme: the upper-level optimisation determines the allocation and contract prices of the DG units, whereas the lower-level optimisation models the reaction of the DisCo. The bilevel programming problem is turned into an equivalent single-level mixed-integer linear optimisation problem using duality properties, which is then solved using commercially available software. Results show the robustness and efficiency of the proposed model compared with existing models. As regards contract pricing, the proposed approach found better solutions than those reported in previous works. © The Institution of Engineering and Technology 2013.
Abstract:
Complex networks have been employed to model many real systems and serve as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word sense disambiguation task, which consists in deriving a function from the supervised (labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. The human (animal) brain, on the other hand, performs both low- and high-level orders of learning, and readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique that encompasses both types of learning to the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model. Copyright (C) EPLA, 2012
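The abstract does not give the rule for weighting the two classifiers, but a common way to mix a low-level (topological) and a high-level (pattern-based) classifier is a convex combination of their class membership scores. A minimal sketch with hypothetical scores for two senses of an ambiguous word:

```python
def hybrid_score(low, high, lam):
    """Convex combination of low-level and high-level class scores;
    lam is the weight given to the high-level term."""
    return {c: (1 - lam) * low[c] + lam * high[c] for c in low}

def classify(low, high, lam):
    """Pick the class with the highest mixed score."""
    mixed = hybrid_score(low, high, lam)
    return max(mixed, key=mixed.get)

# Hypothetical membership scores for two senses of "bank":
low = {"river": 0.6, "finance": 0.4}    # topological classifier
high = {"river": 0.2, "finance": 0.8}   # semantic/pattern classifier
```

Sweeping `lam` from 0 to 1 traces how the decision boundary shifts from the purely low-level to the purely high-level classifier, which is the kind of behavior the paper visualizes.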
Abstract:
Background: We investigated whether 9p21 polymorphisms are associated with cardiovascular events in a group of 611 patients enrolled in the Medical, Angioplasty or Surgery Study II (MASS II), a randomized trial comparing treatments for patients with coronary artery disease (CAD) and preserved left ventricular function. Methods: The participants of the MASS II were genotyped for 9p21 polymorphisms (rs10757274, rs2383206, rs10757278 and rs1333049). Survival curves were calculated with the Kaplan-Meier method and compared with the log-rank statistic. We assessed the relationship between baseline variables and the composite end-point of death, death from cardiac causes and myocardial infarction using a Cox proportional hazards survival model. Results: We observed significant differences in baseline characteristics between patients within each polymorphism genotype group. The frequency of diabetes was lower in patients carrying the GG genotype for rs10757274, rs2383206 and rs10757278 (29.4%, 32.8%, 32.0%) compared to patients carrying AA or AG genotypes (49.1% and 39.2%, p = 0.01; 52.4% and 40.1%, p = 0.01; 47.8% and 37.9%, p = 0.04; respectively). Significant differences in genotype frequencies between double and triple vessel disease patients were observed for rs10757274, rs10757278 and rs1333049. Finally, there was a higher incidence of overall mortality in patients with the GG genotype for rs2383206 compared to patients with AA and AG genotypes (19.5%, 11.9%, 11.0%, respectively; p = 0.04). Moreover, rs2383206 remained significantly associated with a 1.75-fold increased risk of overall mortality (p = 0.02) even after a Cox multivariate model was adjusted for age, previous myocardial infarction, diabetes, smoking and type of coronary anatomy. Conclusions: Our data are in accordance with previous evidence that chromosome 9p21 genetic variation may constitute a genetic modulator in the cardiovascular system in different scenarios.
In patients with established CAD, we observed an association between the rs2383206 and higher incidence of overall mortality and death from cardiac causes in patients with multi-vessel CAD.
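The survival curves in the study were computed with the Kaplan-Meier method. A self-contained sketch of the estimator, with made-up follow-up data rather than the MASS II data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.
    times: follow-up times; events: 1 = event (death), 0 = censored.
    Returns a list of (t, S(t)) at each distinct event time."""
    order = sorted(zip(times, events))
    n_at_risk = len(order)
    s, curve, i = 1.0, [], 0
    while i < len(order):
        t = order[i][0]
        d = sum(1 for tt, e in order[i:] if tt == t and e == 1)  # deaths at t
        m = sum(1 for tt, e in order[i:] if tt == t)             # all leaving at t
        if d > 0:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

# Five hypothetical patients; two censored (event = 0):
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

Comparing two such curves between genotype groups with the log-rank statistic, and adjusting via a Cox model, is what the study reports.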
Abstract:
Competitive learning is an important machine learning approach that is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particles' walking rule is a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique presents high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters, using an evaluator index that monitors the information generated by the competition process itself. We hope this paper provides an alternative approach to the study of competitive learning.
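The abstract describes the walking rule only as a stochastic combination of random and preferential movements. One plausible reading, assumed here and not taken from the paper, is: with some probability move preferentially towards neighbours the particle already dominates, otherwise move uniformly at random:

```python
import random

def next_node(graph, current, domination, particle, p_pref, rng):
    """One step of a particle's walk. graph: node -> list of neighbours;
    domination[node][particle]: accumulated domination level (assumed form)."""
    neighbours = graph[current]
    if rng.random() < p_pref:
        # Preferential move: weight neighbours by this particle's domination.
        weights = [1.0 + domination.get(n, {}).get(particle, 0.0)
                   for n in neighbours]
        r, acc = rng.random() * sum(weights), 0.0
        for n, w in zip(neighbours, weights):
            acc += w
            if r <= acc:
                return n
    return rng.choice(neighbours)  # random move (also a float-edge fallback)

# Tiny example: particle "p1" strongly dominates node "b".
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
domination = {"b": {"p1": 5.0}}
rng = random.Random(42)
visits = [next_node(graph, "a", domination, "p1", 1.0, rng)
          for _ in range(1000)]
```

With purely preferential movement (`p_pref = 1.0`) the particle visits the dominated node far more often, which is the reinforcement mechanism that lets each particle carve out a community.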
Abstract:
Binary and ternary systems of Ni2+, Zn2+ and Pb2+ were investigated at initial metal concentrations of 0.5, 1.0 and 2.0 mM as competitive adsorbates, using Arthrospira platensis and Chlorella vulgaris as biosorbents. The experimental results were evaluated in terms of equilibrium sorption capacity and metal removal efficiency and fitted to the multi-component Langmuir and Freundlich isotherms. The pseudo-second-order model of Ho and McKay described the adsorption kinetics well, and FT-IR spectroscopy confirmed metal binding to both biomasses. The interference of Ni2+ and Zn2+ on Pb2+ sorption was lower than the reverse, likely due to the biosorbents' preference for Pb. In general, the higher the total initial metal concentration, the lower the adsorption capacity. The results of this study demonstrate that dry biomass of C. vulgaris is a better biosorbent than A. platensis and suggest its use as an effective alternative sorbent for metal removal from wastewater. (C) 2012 Elsevier B.V. All rights reserved.
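The single-component Langmuir isotherm and the Ho-McKay pseudo-second-order kinetic model used in such fits have simple closed forms. A minimal sketch; the parameter values below are illustrative, not fitted values from the study:

```python
def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: uptake q (mg/g) at equilibrium concentration c_eq.
    q_max is the monolayer capacity, k_l the affinity constant."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

def pseudo_second_order(t, q_e, k2):
    """Ho-McKay pseudo-second-order kinetics: uptake q(t) at time t,
    approaching the equilibrium uptake q_e."""
    return k2 * q_e ** 2 * t / (1.0 + k2 * q_e * t)
```

Fitting amounts to choosing (q_max, k_l) or (q_e, k2) that minimize the residuals against the measured uptakes; the paper extends the isotherm to its multi-component Langmuir form for the binary and ternary mixtures.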
Abstract:
In recent years, Intelligent Tutoring Systems have proven a very successful way of improving the learning experience. Many issues must still be addressed before this technology can be considered mature. One of the main problems in Intelligent Tutoring Systems is content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge to be used by the ITS during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in Pseudo-Cognitive Tutors. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
Abstract:
Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions based on local criteria, resulting in the emergence of desired and fruitful global coordination properties of the system. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of this conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space.
In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques like probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
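ReSpecT builds on the tuple-space model, in which processes coordinate by inserting, reading and consuming tuples through a shared medium. The following is a toy Linda-style tuple space, not ReSpecT itself (which uses logic tuples and programmable reactions to events); `None` stands in for a template wildcard:

```python
class TupleSpace:
    """Minimal Linda-style tuple space: out inserts, rd reads (non-destructive),
    in_ removes. Templates use None as a wildcard field."""

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Insert a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

    def rd(self, template):
        """Return the first matching tuple without removing it, or None."""
        for t in self._tuples:
            if self._match(template, t):
                return t
        return None

    def in_(self, template):
        """Remove and return the first matching tuple, or None."""
        for i, t in enumerate(self._tuples):
            if self._match(template, t):
                return self._tuples.pop(i)
        return None

ts = TupleSpace()
ts.out(("temperature", "room1", 21))
```

Self-organising coordination then amounts to the media themselves deciding, by local rules, where tuples spread and how they decay, rather than leaving this to the coordinated processes.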
Abstract:
This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It focuses on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former, we first addressed the problem of capacity in the presence of several impairments. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems, we proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional union bound at no additional complexity. Finally, we designed a low-complexity SNR estimator, which saves half of the multiplications of the ML estimator while achieving similar estimation accuracy.
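PAPR, the metric the new subcarrier mapping is designed to reduce, is the ratio of peak to mean instantaneous power of the time-domain OFDM signal. A minimal computation of it (the thesis's mapping scheme itself is not reproduced here; the oversampling factor is a common convention for resolving the analog peak):

```python
import cmath
import math

def ofdm_papr_db(symbols, oversample=4):
    """Peak-to-average power ratio (dB) of one OFDM symbol:
    oversampled IDFT of the subcarrier symbols, then peak/mean power."""
    n = len(symbols)
    m = n * oversample  # oversampling for finer time resolution
    x = [sum(symbols[i] * cmath.exp(2j * cmath.pi * i * k / m)
             for i in range(n)) / n
         for k in range(m)]
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / m))
```

A single active subcarrier gives a constant-envelope signal (0 dB PAPR), while identical symbols on all subcarriers add coherently at one instant, giving a large PAPR; mappings that avoid such coherent alignment are what keep the amplifier out of its nonlinear region.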
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to meet ever-increasing requirements. This evolution pushes up the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility that affects non-recurring engineering costs, time to market, and market volumes. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms to address all the constraints introduced above. This thesis focuses on exploring the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators.
In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.