884 results for Logic Programming, Constraint Logic Programming, Multi-Agent Systems, Labelled LP
Abstract:
A new identification algorithm is introduced for the Hammerstein model, which consists of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme combining the inverse of de Casteljau's algorithm, the least squares algorithm, and a constrained Gauss-Newton algorithm. Related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples, including systems with hard nonlinearities, are used to illustrate the efficacy of the proposed approach through comparisons with other methods.
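The abstract gives only the model class, not the algorithm's details. As a rough illustration, the following minimal Python sketch (with invented control points and coefficients, not the paper's identification scheme) simulates a Hammerstein system whose static nonlinearity is a Bezier curve evaluated by de Casteljau's algorithm, feeding a linear difference equation:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a 1-D Bezier curve with control points `ctrl` at t in [0, 1]."""
    c = np.asarray(ctrl, dtype=float)
    while len(c) > 1:                      # repeated linear interpolation
        c = (1.0 - t) * c[:-1] + t * c[1:]
    return c[0]

def hammerstein(u, ctrl, a, b, u_min=-1.0, u_max=1.0):
    """Static Bezier-Bernstein nonlinearity followed by a linear ARX block."""
    t = np.clip((np.asarray(u, dtype=float) - u_min) / (u_max - u_min), 0.0, 1.0)
    v = np.array([de_casteljau(ctrl, ti) for ti in t])   # intermediate signal
    y = np.zeros(len(v))
    for k in range(len(v)):
        y[k] = sum(a[i] * y[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0) \
             + sum(b[j] * v[k - j] for j in range(len(b)) if k - j >= 0)
    return y

# Invented cubic nonlinearity and first-order linear dynamics:
u = np.sin(np.linspace(0.0, 10.0, 200))
y = hammerstein(u, ctrl=[0.0, 0.2, 0.9, 1.0], a=[0.7], b=[0.3])
```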
Abstract:
Quantitative analysis by mass spectrometry (MS) is a major challenge in proteomics, as the correlation between analyte concentration and signal intensity is often poor due to varying ionisation efficiencies in the presence of molecular competitors. However, relative quantitation methods that utilise differential stable isotope labelling and mass spectrometric detection are available. Many drawbacks inherent to chemical labelling methods (ICAT, iTRAQ) can be overcome by metabolic labelling with amino acids containing stable isotopes (e.g. 13C and/or 15N) in methods such as Stable Isotope Labelling with Amino acids in Cell culture (SILAC). SILAC has also been used for labelling of proteins in plant cell cultures (1) but is not suitable for whole-plant labelling. Plants are usually autotrophic (fixing carbon from atmospheric CO2), and thus labelling with carbon isotopes is impractical. In addition, SILAC is expensive. Recently, Arabidopsis cell cultures were labelled with 15N in a medium containing nitrate as the sole nitrogen source; this was shown to be suitable for quantifying proteins and nitrogen-containing metabolites from this cell culture (2,3). Labelling whole plants, however, offers the advantage of quantitatively studying the response of a whole multicellular organism, or of multi-organism systems, to stimulation or disease at the molecular level. Furthermore, plant metabolism enables the use of inexpensive labelling media without introducing additional stress to the organism. Finally, hydroponics is ideal for undertaking metabolic labelling under extremely well-controlled conditions. We demonstrate the suitability of metabolic 15N hydroponic isotope labelling of entire plants (HILEP) for relative quantitative proteomic analysis by mass spectrometry. To evaluate this methodology, Arabidopsis plants were grown hydroponically in 14N and 15N media and subjected to oxidative stress.
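As a toy illustration of the relative quantitation such labelling enables (the peak intensities below are invented, and median-centring is just one common normalisation, not necessarily the one used in the study), a light/heavy peptide-ratio computation might look like:

```python
import numpy as np

def relative_quantitation(light, heavy):
    """Per-peptide 14N/15N intensity ratios, median-centred in log2 space
    to correct for unequal mixing of the two samples."""
    ratios = np.asarray(light, dtype=float) / np.asarray(heavy, dtype=float)
    log_ratios = np.log2(ratios)
    return log_ratios - np.median(log_ratios)

# Invented peak intensities for three peptide pairs:
fold_changes = relative_quantitation(light=[1.2e6, 8.0e5, 3.1e6],
                                     heavy=[6.1e5, 7.9e5, 3.0e6])
```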
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift in the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach for distributed memory systems to the frequent subgraph mining problem; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
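To make the communication pattern behind distributed association rule mining concrete, here is a minimal, hypothetical Python sketch of the count-then-merge scheme (not any workshop paper's algorithm): each worker scans its own data partition, and only the small count tables cross process boundaries.

```python
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def count_pairs(transactions):
    """Local support counts for item pairs within one data partition."""
    counts = Counter()
    for t in transactions:
        counts.update(combinations(sorted(t), 2))
    return counts

def parallel_pair_support(partitions, workers=2):
    """Count-then-merge: workers scan partitions independently and only the
    small count tables are exchanged, keeping communication cheap."""
    total = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for local in pool.map(count_pairs, partitions):
            total.update(local)
    return total

if __name__ == "__main__":
    parts = [[["a", "b", "c"], ["a", "c"]], [["b", "c"], ["a", "b", "c"]]]
    print(parallel_pair_support(parts).most_common(3))
```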
Abstract:
In this work we construct the stationary measure of the N-species totally asymmetric simple exclusion process (TASEP) in a matrix product formulation. We make the connection between the matrix product formulation and the queueing-theory picture of Ferrari and Martin. In particular, in the standard representation, the matrices act on the space of queue lengths. For N > 2 the matrices in fact become tensor products of elements of quadratic algebras. This enables us to give a purely algebraic proof of the stationarity of the measure, which we present for N = 3.
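For orientation, the two-species (N = 2) case is governed by a well-known quadratic algebra (due to Derrida, Janowsky, Lebowitz and Speer), which the construction above generalises to N species via tensor products. A sketch of that base case:

```latex
% Two-species (N = 2) matrix-product ansatz on a ring, with
% tau_i = 1 (first-class particle), 2 (second-class particle), 0 (hole):
\[
  P(\tau_1,\dots,\tau_L) \;\propto\;
  \operatorname{Tr}\!\bigl( X_{\tau_1} X_{\tau_2} \cdots X_{\tau_L} \bigr),
  \qquad X_1 = D, \quad X_2 = A, \quad X_0 = E,
\]
\[
  DE = D + E, \qquad DA = A, \qquad AE = A .
\]
```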
Abstract:
The Grid is a large-scale computer system that is capable of coordinating resources that are not subject to centralised control, whilst using standard, open, general-purpose protocols and interfaces, and delivering non-trivial qualities of service. In this chapter, we argue that Grid applications very strongly suggest the use of agent-based computing, and we review key uses of agent technologies in Grids: user agents, able to customise and personalise data; agent communication languages offering a generic and portable communication medium; and negotiation allowing multiple distributed entities to reach service level agreements. In the second part of the chapter, we focus on Grid service discovery, which we have identified as a prime candidate for use of agent technologies: we show that Grid services need to be located via personalised, semantically rich discovery processes, which must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. We present UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. The outcome is a flexible service registry which is compatible with existing standards and also provides metadata-enhanced service discovery.
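The abstract does not spell out the UDDI-MT interface. Purely as a toy model of metadata-enhanced discovery (all names and the API below are hypothetical), a registry that accepts arbitrary provider- and user-supplied annotations and filters queries on them might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    """A directory record plus an open-ended metadata map that both
    providers and users may populate."""
    name: str
    url: str
    metadata: dict = field(default_factory=dict)

class MetadataRegistry:
    def __init__(self):
        self._entries = []

    def publish(self, entry):
        self._entries.append(entry)

    def annotate(self, name, key, value):
        """Attach metadata after publication, e.g. a user's quality rating."""
        for e in self._entries:
            if e.name == name:
                e.metadata[key] = value

    def discover(self, **required):
        """Return services whose metadata matches every requested key/value."""
        return [e for e in self._entries
                if all(e.metadata.get(k) == v for k, v in required.items())]

registry = MetadataRegistry()
registry.publish(ServiceEntry("render", "http://grid.example/render",
                              {"domain": "graphics"}))
registry.annotate("render", "avg_latency_ms", 120)
matches = registry.discover(domain="graphics", avg_latency_ms=120)
```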
Abstract:
The IST-CONTRACT project is in the process of creating an electronic contracting language. One of the goals of this language is that it has formal underpinnings, and formalizations at a number of levels have been created. One of the lowest levels, upon which the other levels are built, is the normative level. At this level, we identify how contract clauses (modeled as norms) may evolve over time. In this paper, we describe this formalization and show how we may associate various states with a norm throughout its lifecycle. We also show how more complex evaluations may be carried out over a norm, and conclude with an example showing the application of the framework to a contract and its associated norms.
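The paper's actual state set is not given in the abstract. As an illustrative sketch only (the states and transition conditions below are invented, not the IST-CONTRACT formalization), a norm lifecycle can be modelled as a small state machine:

```python
from enum import Enum, auto

class NormState(Enum):
    INACTIVE = auto()    # activation condition not yet met
    ACTIVE = auto()      # obligation currently in force
    FULFILLED = auto()   # obligation discharged in time
    VIOLATED = auto()    # deadline passed without fulfilment

class Norm:
    """A contract clause whose state evolves as the world changes."""
    def __init__(self, activation, fulfilment, deadline):
        self.activation, self.fulfilment, self.deadline = activation, fulfilment, deadline
        self.state = NormState.INACTIVE

    def step(self, world):
        """Advance the norm's state given a snapshot of the world."""
        if self.state is NormState.INACTIVE and self.activation(world):
            self.state = NormState.ACTIVE
        elif self.state is NormState.ACTIVE:
            if self.fulfilment(world):
                self.state = NormState.FULFILLED
            elif self.deadline(world):
                self.state = NormState.VIOLATED
        return self.state

norm = Norm(activation=lambda w: w["order_placed"],
            fulfilment=lambda w: w["goods_delivered"],
            deadline=lambda w: w["day"] > 30)
state = norm.step({"order_placed": True, "goods_delivered": False, "day": 3})
```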
Abstract:
In Brazil and around the world, oil companies are searching for, and expect the development of, new technologies and processes that can increase the oil recovery factor in mature reservoirs in a simple and inexpensive way. Recent research has therefore developed a new process called Gas Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, is being extensively studied through physical scale models and laboratory core floods because of its high oil recoveries relative to other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of the fluids, towards a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir and a multi-component fluid model with characteristics similar to those of Brazilian light-oil fields were built in a compositional simulator in order to optimise the operational parameters. The process was simulated in GEM (CMG, 2009.10). The operational parameters studied were the gas injection rate, the type of injected gas, and the locations of the injector and producer wells. We also studied the influence of a water drive on the process. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD. It was also found that the highest injection rates produced the highest recovery factors; this parameter controls the velocity of the injected-gas front and determines whether gravitational forces dominate the recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had little influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.
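Whether gravity segregation dominates a displacement is often screened with a dimensionless gravity-to-viscous ratio. The sketch below uses one common form of that ratio with invented property values; neither the formula choice nor the numbers come from the study in the abstract.

```python
def gravity_number(k, delta_rho, mu_oil, u, g=9.81):
    """One common gravity-to-viscous ratio, N_g = k * d_rho * g / (mu * u);
    large values suggest the gravity-dominated regime GAGD relies on,
    though any threshold is case-specific."""
    return k * delta_rho * g / (mu_oil * u)

# Invented illustrative values (not figures from the study):
# 2 D permeability, 300 kg/m3 gas-oil density contrast,
# 2 mPa.s light oil, 0.05 m/day frontal velocity.
Ng = gravity_number(k=2e-12, delta_rho=300.0, mu_oil=2e-3,
                    u=0.05 / 86400.0)
print(f"N_g = {Ng:.1f}")   # ~5: gravity forces comparable to viscous forces
```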
Abstract:
Industrial automation is directly linked to the development of information technology. Better hardware solutions, as well as improvements in software development methodologies, have made the rapid growth of production process control possible. In this thesis, we propose an architecture that joins two technologies, one from the hardware field (industrial networks) and one from the software field (multi-agent systems). The objective of this proposal is to combine these technologies in a multi-agent architecture that allows control strategies to be implemented in field devices. To this end, we develop an agent architecture that detects and solves problems that may occur in an industrial network environment. Our work allies machine learning with the industrial context, making the proposed multi-agent architecture adaptable to unfamiliar or unexpected production environments. We use neural networks and present strategies for allocating these networks to industrial-network field devices. With this, we intend to improve decision support at the plant level and to allow operation independent of human intervention.
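As a toy illustration only (the architecture, weights, and threshold below are all invented, not the thesis design), an agent embedding a small neural network in a field device for fault detection might look like:

```python
import numpy as np

class DeviceAgent:
    """An agent embedded in a field device: a tiny feed-forward network
    scores sensor readings and the agent raises an alarm above a threshold."""
    def __init__(self, w1, w2, threshold=0.8):
        self.w1, self.w2, self.threshold = w1, w2, threshold

    def _fault_probability(self, x):
        h = np.tanh(self.w1 @ x)                     # hidden layer
        return 1.0 / (1.0 + np.exp(-(self.w2 @ h)))  # sigmoid output

    def perceive_act(self, reading):
        """One perceive-decide-act cycle of the agent."""
        p = self._fault_probability(np.asarray(reading, dtype=float))
        return ("ALARM" if p > self.threshold else "OK", p)

# Placeholder weights; a real deployment would train these offline.
agent = DeviceAgent(w1=np.random.randn(4, 3) * 0.5,
                    w2=np.random.randn(4) * 0.5)
status, prob = agent.perceive_act([0.9, 1.1, 0.4])
```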
Abstract:
In this work, we propose the Interperception paradigm, a new approach comprising a set of rules and a software architecture for merging users from different interfaces in the same virtual environment. The system detects the user's resources and applies transformations to the data so that it can be visualised on 3D, 2D, and textual (1D) interfaces. This allows any user to connect, access information, and exchange information with other users in a feasible way, without the need to change hardware or software. As results, two virtual environments built according to this paradigm are presented.
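As a minimal, hypothetical sketch of the interperception idea (the data layout and function names are invented, not the work's architecture), the same scene description can be degraded to match each interface's capability:

```python
def render_for(capability, scene):
    """Degrade one scene description to what the user's interface can show."""
    if capability == "3d":
        return scene                                   # full model, as-is
    if capability == "2d":                             # drop the depth axis
        return [{k: v for k, v in obj.items() if k != "z"} for obj in scene]
    if capability == "1d":                             # textual summary
        return "; ".join(obj["label"] for obj in scene)
    raise ValueError(f"unknown interface capability: {capability}")

scene = [{"label": "avatar", "x": 1, "y": 2, "z": 0},
         {"label": "door",   "x": 5, "y": 0, "z": 3}]
print(render_for("1d", scene))   # -> avatar; door
```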